2026-04-13 00:00:06.835508 | Job console starting
2026-04-13 00:00:06.873544 | Updating git repos
2026-04-13 00:00:07.092225 | Cloning repos into workspace
2026-04-13 00:00:07.319017 | Restoring repo states
2026-04-13 00:00:07.345080 | Merging changes
2026-04-13 00:00:07.345103 | Checking out repos
2026-04-13 00:00:08.083909 | Preparing playbooks
2026-04-13 00:00:09.294275 | Running Ansible setup
2026-04-13 00:00:18.187369 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-13 00:00:19.400190 |
2026-04-13 00:00:19.400334 | PLAY [Base pre]
2026-04-13 00:00:19.464646 |
2026-04-13 00:00:19.464769 | TASK [Setup log path fact]
2026-04-13 00:00:19.504816 | orchestrator | ok
2026-04-13 00:00:19.569026 |
2026-04-13 00:00:19.569178 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-13 00:00:19.709714 | orchestrator | ok
2026-04-13 00:00:19.762028 |
2026-04-13 00:00:19.762142 | TASK [emit-job-header : Print job information]
2026-04-13 00:00:19.875689 | # Job Information
2026-04-13 00:00:19.875861 | Ansible Version: 2.16.14
2026-04-13 00:00:19.875897 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-13 00:00:19.875932 | Pipeline: periodic-midnight
2026-04-13 00:00:19.875956 | Executor: 521e9411259a
2026-04-13 00:00:19.875977 | Triggered by: https://github.com/osism/testbed
2026-04-13 00:00:19.875999 | Event ID: bd0f35ccae72404487dde80aa1dbe86f
2026-04-13 00:00:19.885340 |
2026-04-13 00:00:19.885446 | LOOP [emit-job-header : Print node information]
2026-04-13 00:00:20.160211 | orchestrator | ok:
2026-04-13 00:00:20.160370 | orchestrator | # Node Information
2026-04-13 00:00:20.160404 | orchestrator | Inventory Hostname: orchestrator
2026-04-13 00:00:20.160429 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-13 00:00:20.160451 | orchestrator | Username: zuul-testbed03
2026-04-13 00:00:20.160471 | orchestrator | Distro: Debian 12.13
2026-04-13 00:00:20.160494 | orchestrator | Provider: static-testbed
2026-04-13 00:00:20.162381 | orchestrator | Region:
2026-04-13 00:00:20.162455 | orchestrator | Label: testbed-orchestrator
2026-04-13 00:00:20.162484 | orchestrator | Product Name: OpenStack Nova
2026-04-13 00:00:20.162507 | orchestrator | Interface IP: 81.163.193.140
2026-04-13 00:00:20.188462 |
2026-04-13 00:00:20.188561 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-13 00:00:22.101425 | orchestrator -> localhost | changed
2026-04-13 00:00:22.110090 |
2026-04-13 00:00:22.110210 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-13 00:00:26.203057 | orchestrator -> localhost | changed
2026-04-13 00:00:26.214290 |
2026-04-13 00:00:26.214384 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-13 00:00:26.995242 | orchestrator -> localhost | ok
2026-04-13 00:00:27.002735 |
2026-04-13 00:00:27.002859 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-13 00:00:27.042610 | orchestrator | ok
2026-04-13 00:00:27.074035 | orchestrator | included: /var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-13 00:00:27.093532 |
2026-04-13 00:00:27.093638 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-13 00:00:29.778573 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-13 00:00:29.779793 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/work/f0c0073e0ad3480e915bcf487ee2e865_id_rsa
2026-04-13 00:00:29.779859 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/work/f0c0073e0ad3480e915bcf487ee2e865_id_rsa.pub
2026-04-13 00:00:29.779885 | orchestrator -> localhost | The key fingerprint is:
2026-04-13 00:00:29.779906 | orchestrator -> localhost | SHA256:gjpL65CvN2Ps8dAjUQf2vXnQSEKboK3YUvSlf12x8RE zuul-build-sshkey
2026-04-13 00:00:29.779925 | orchestrator -> localhost | The key's randomart image is:
2026-04-13 00:00:29.779952 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-13 00:00:29.779971 | orchestrator -> localhost | | . +.+ . o E. |
2026-04-13 00:00:29.779988 | orchestrator -> localhost | | . = * * o = . |
2026-04-13 00:00:29.780005 | orchestrator -> localhost | | o = = + . o . |
2026-04-13 00:00:29.780021 | orchestrator -> localhost | | + o + = . |
2026-04-13 00:00:29.780037 | orchestrator -> localhost | |o + . o S o |
2026-04-13 00:00:29.780057 | orchestrator -> localhost | | o + o . |
2026-04-13 00:00:29.780075 | orchestrator -> localhost | |o.B o |
2026-04-13 00:00:29.780091 | orchestrator -> localhost | | +*O . |
2026-04-13 00:00:29.780108 | orchestrator -> localhost | |.**o. |
2026-04-13 00:00:29.780125 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-13 00:00:29.780173 | orchestrator -> localhost | ok: Runtime: 0:00:01.230350
2026-04-13 00:00:29.786736 |
2026-04-13 00:00:29.786907 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-13 00:00:29.854476 | orchestrator | ok
2026-04-13 00:00:29.871427 | orchestrator | included: /var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-13 00:00:29.878867 |
2026-04-13 00:00:29.878953 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-13 00:00:29.917145 | orchestrator | skipping: Conditional result was False
2026-04-13 00:00:29.923722 |
2026-04-13 00:00:29.923808 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-13 00:00:30.718735 | orchestrator | changed
2026-04-13 00:00:30.723802 |
2026-04-13 00:00:30.723880 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-13 00:00:31.019566 | orchestrator | ok
2026-04-13 00:00:31.024779 |
2026-04-13 00:00:31.024867 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-13 00:00:31.459942 | orchestrator | ok
2026-04-13 00:00:31.472218 |
2026-04-13 00:00:31.472311 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-13 00:00:31.961102 | orchestrator | ok
2026-04-13 00:00:31.966073 |
2026-04-13 00:00:31.966151 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-13 00:00:31.988624 | orchestrator | skipping: Conditional result was False
2026-04-13 00:00:31.995092 |
2026-04-13 00:00:31.995178 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-13 00:00:33.001373 | orchestrator -> localhost | changed
2026-04-13 00:00:33.016425 |
2026-04-13 00:00:33.016550 | TASK [add-build-sshkey : Add back temp key]
2026-04-13 00:00:33.971242 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/work/f0c0073e0ad3480e915bcf487ee2e865_id_rsa (zuul-build-sshkey)
2026-04-13 00:00:33.971432 | orchestrator -> localhost | ok: Runtime: 0:00:00.028703
2026-04-13 00:00:33.987900 |
2026-04-13 00:00:33.987995 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-13 00:00:34.434882 | orchestrator | ok
2026-04-13 00:00:34.441585 |
2026-04-13 00:00:34.441680 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-13 00:00:34.503083 | orchestrator | skipping: Conditional result was False
2026-04-13 00:00:34.556743 |
2026-04-13 00:00:34.556856 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-13 00:00:35.001562 | orchestrator | ok
2026-04-13 00:00:35.013115 |
2026-04-13 00:00:35.013235 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-13 00:00:35.057876 | orchestrator | ok
2026-04-13 00:00:35.068525 |
2026-04-13 00:00:35.068622 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-13 00:00:35.929465 | orchestrator -> localhost | ok
2026-04-13 00:00:35.935361 |
2026-04-13 00:00:35.935447 | TASK [validate-host : Collect information about the host]
2026-04-13 00:00:37.406230 | orchestrator | ok
2026-04-13 00:00:37.426630 |
2026-04-13 00:00:37.426733 | TASK [validate-host : Sanitize hostname]
2026-04-13 00:00:37.519690 | orchestrator | ok
2026-04-13 00:00:37.524091 |
2026-04-13 00:00:37.524169 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-13 00:00:38.789237 | orchestrator -> localhost | changed
2026-04-13 00:00:38.794191 |
2026-04-13 00:00:38.794287 | TASK [validate-host : Collect information about zuul worker]
2026-04-13 00:00:39.556174 | orchestrator | ok
2026-04-13 00:00:39.560569 |
2026-04-13 00:00:39.560648 | TASK [validate-host : Write out all zuul information for each host]
2026-04-13 00:00:40.822332 | orchestrator -> localhost | changed
2026-04-13 00:00:40.831175 |
2026-04-13 00:00:40.831282 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-13 00:00:41.136867 | orchestrator | ok
2026-04-13 00:00:41.141864 |
2026-04-13 00:00:41.141947 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-13 00:02:04.718462 | orchestrator | changed:
2026-04-13 00:02:04.720122 | orchestrator | .d..t...... src/
2026-04-13 00:02:04.720373 | orchestrator | .d..t...... src/github.com/
2026-04-13 00:02:04.720466 | orchestrator | .d..t...... src/github.com/osism/
2026-04-13 00:02:04.720537 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-13 00:02:04.720599 | orchestrator | RedHat.yml
2026-04-13 00:02:04.755747 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-13 00:02:04.755765 | orchestrator | RedHat.yml
2026-04-13 00:02:04.755817 | orchestrator | = 1.53.0"...
2026-04-13 00:02:18.104197 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-13 00:02:18.246696 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-13 00:02:18.779333 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-13 00:02:18.847017 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-13 00:02:19.689767 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-13 00:02:19.755878 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-13 00:02:20.323522 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-13 00:02:20.323630 | orchestrator |
2026-04-13 00:02:20.323646 | orchestrator | Providers are signed by their developers.
2026-04-13 00:02:20.323657 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-13 00:02:20.323667 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-13 00:02:20.323680 | orchestrator |
2026-04-13 00:02:20.323689 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-13 00:02:20.323698 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-13 00:02:20.323727 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-13 00:02:20.323737 | orchestrator | you run "tofu init" in the future.
2026-04-13 00:02:20.323887 | orchestrator |
2026-04-13 00:02:20.323911 | orchestrator | OpenTofu has been successfully initialized!
2026-04-13 00:02:20.323920 | orchestrator |
2026-04-13 00:02:20.323929 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-13 00:02:20.323938 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-13 00:02:20.323947 | orchestrator | should now work.
2026-04-13 00:02:20.323955 | orchestrator |
2026-04-13 00:02:20.323964 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-13 00:02:20.323973 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-13 00:02:20.323982 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-13 00:02:20.494567 | orchestrator | Created and switched to workspace "ci"!
2026-04-13 00:02:20.494776 | orchestrator |
2026-04-13 00:02:20.494794 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-13 00:02:20.494806 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-13 00:02:20.494816 | orchestrator | for this configuration.
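[Editor's note] The "tofu init" output above shows which providers were resolved (hashicorp/null v3.2.4, terraform-provider-openstack/openstack v3.4.0, hashicorp/local v2.8.0). A minimal sketch of a `required_providers` block that would yield this resolution is shown below; the actual file in the testbed configuration is not part of this log, and the constraint values other than the ">= 2.2.0" echoed by the log (the openstack constraint is truncated in the log) are assumptions for illustration only:

```hcl
# Hypothetical versions.tf sketch; only hashicorp/local's ">= 2.2.0"
# constraint is confirmed by the log above, the others are assumed.
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
  }
}
```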
2026-04-13 00:02:21.250955 | orchestrator | ci.auto.tfvars
2026-04-13 00:02:21.338370 | orchestrator | default_custom.tf
2026-04-13 00:02:22.276965 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-13 00:02:22.880768 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-13 00:02:23.494114 | orchestrator |
2026-04-13 00:02:23.494173 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-13 00:02:23.494181 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-13 00:02:23.494186 | orchestrator | + create
2026-04-13 00:02:23.494191 | orchestrator | <= read (data resources)
2026-04-13 00:02:23.494196 | orchestrator |
2026-04-13 00:02:23.494200 | orchestrator | OpenTofu will perform the following actions:
2026-04-13 00:02:23.494204 | orchestrator |
2026-04-13 00:02:23.494208 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-13 00:02:23.494212 | orchestrator | # (config refers to values not yet known)
2026-04-13 00:02:23.494216 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-13 00:02:23.494220 | orchestrator | + checksum = (known after apply)
2026-04-13 00:02:23.494225 | orchestrator | + created_at = (known after apply)
2026-04-13 00:02:23.494229 | orchestrator | + file = (known after apply)
2026-04-13 00:02:23.494233 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494252 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494256 | orchestrator | + min_disk_gb = (known after apply)
2026-04-13 00:02:23.494260 | orchestrator | + min_ram_mb = (known after apply)
2026-04-13 00:02:23.494264 | orchestrator | + most_recent = true
2026-04-13 00:02:23.494268 | orchestrator | + name = (known after apply)
2026-04-13 00:02:23.494272 | orchestrator | + protected = (known after apply)
2026-04-13 00:02:23.494275 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494282 | orchestrator | + schema = (known after apply)
2026-04-13 00:02:23.494286 | orchestrator | + size_bytes = (known after apply)
2026-04-13 00:02:23.494290 | orchestrator | + tags = (known after apply)
2026-04-13 00:02:23.494294 | orchestrator | + updated_at = (known after apply)
2026-04-13 00:02:23.494298 | orchestrator | }
2026-04-13 00:02:23.494302 | orchestrator |
2026-04-13 00:02:23.494306 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-13 00:02:23.494309 | orchestrator | # (config refers to values not yet known)
2026-04-13 00:02:23.494313 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-13 00:02:23.494317 | orchestrator | + checksum = (known after apply)
2026-04-13 00:02:23.494321 | orchestrator | + created_at = (known after apply)
2026-04-13 00:02:23.494325 | orchestrator | + file = (known after apply)
2026-04-13 00:02:23.494328 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494332 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494336 | orchestrator | + min_disk_gb = (known after apply)
2026-04-13 00:02:23.494339 | orchestrator | + min_ram_mb = (known after apply)
2026-04-13 00:02:23.494343 | orchestrator | + most_recent = true
2026-04-13 00:02:23.494347 | orchestrator | + name = (known after apply)
2026-04-13 00:02:23.494350 | orchestrator | + protected = (known after apply)
2026-04-13 00:02:23.494354 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494358 | orchestrator | + schema = (known after apply)
2026-04-13 00:02:23.494361 | orchestrator | + size_bytes = (known after apply)
2026-04-13 00:02:23.494365 | orchestrator | + tags = (known after apply)
2026-04-13 00:02:23.494369 | orchestrator | + updated_at = (known after apply)
2026-04-13 00:02:23.494372 | orchestrator | }
2026-04-13 00:02:23.494376 | orchestrator |
2026-04-13 00:02:23.494380 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-13 00:02:23.494384 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-13 00:02:23.494387 | orchestrator | + content = (known after apply)
2026-04-13 00:02:23.494392 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.494395 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.494399 | orchestrator | + content_md5 = (known after apply)
2026-04-13 00:02:23.494403 | orchestrator | + content_sha1 = (known after apply)
2026-04-13 00:02:23.494406 | orchestrator | + content_sha256 = (known after apply)
2026-04-13 00:02:23.494410 | orchestrator | + content_sha512 = (known after apply)
2026-04-13 00:02:23.494414 | orchestrator | + directory_permission = "0777"
2026-04-13 00:02:23.494417 | orchestrator | + file_permission = "0644"
2026-04-13 00:02:23.494421 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-13 00:02:23.494425 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494428 | orchestrator | }
2026-04-13 00:02:23.494432 | orchestrator |
2026-04-13 00:02:23.494436 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-13 00:02:23.494440 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-13 00:02:23.494443 | orchestrator | + content = (known after apply)
2026-04-13 00:02:23.494447 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.494475 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.494479 | orchestrator | + content_md5 = (known after apply)
2026-04-13 00:02:23.494483 | orchestrator | + content_sha1 = (known after apply)
2026-04-13 00:02:23.494487 | orchestrator | + content_sha256 = (known after apply)
2026-04-13 00:02:23.494491 | orchestrator | + content_sha512 = (known after apply)
2026-04-13 00:02:23.494494 | orchestrator | + directory_permission = "0777"
2026-04-13 00:02:23.494498 | orchestrator | + file_permission = "0644"
2026-04-13 00:02:23.494506 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-13 00:02:23.494509 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494513 | orchestrator | }
2026-04-13 00:02:23.494517 | orchestrator |
2026-04-13 00:02:23.494525 | orchestrator | # local_file.inventory will be created
2026-04-13 00:02:23.494529 | orchestrator | + resource "local_file" "inventory" {
2026-04-13 00:02:23.494533 | orchestrator | + content = (known after apply)
2026-04-13 00:02:23.494537 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.494540 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.494544 | orchestrator | + content_md5 = (known after apply)
2026-04-13 00:02:23.494548 | orchestrator | + content_sha1 = (known after apply)
2026-04-13 00:02:23.494552 | orchestrator | + content_sha256 = (known after apply)
2026-04-13 00:02:23.494556 | orchestrator | + content_sha512 = (known after apply)
2026-04-13 00:02:23.494559 | orchestrator | + directory_permission = "0777"
2026-04-13 00:02:23.494563 | orchestrator | + file_permission = "0644"
2026-04-13 00:02:23.494567 | orchestrator | + filename = "inventory.ci"
2026-04-13 00:02:23.494570 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494574 | orchestrator | }
2026-04-13 00:02:23.494578 | orchestrator |
2026-04-13 00:02:23.494582 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-13 00:02:23.494586 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-13 00:02:23.494589 | orchestrator | + content = (sensitive value)
2026-04-13 00:02:23.494593 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-13 00:02:23.494597 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-13 00:02:23.494600 | orchestrator | + content_md5 = (known after apply)
2026-04-13 00:02:23.494604 | orchestrator | + content_sha1 = (known after apply)
2026-04-13 00:02:23.494608 | orchestrator | + content_sha256 = (known after apply)
2026-04-13 00:02:23.494621 | orchestrator | + content_sha512 = (known after apply)
2026-04-13 00:02:23.494625 | orchestrator | + directory_permission = "0700"
2026-04-13 00:02:23.494629 | orchestrator | + file_permission = "0600"
2026-04-13 00:02:23.494633 | orchestrator | + filename = ".id_rsa.ci"
2026-04-13 00:02:23.494639 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494644 | orchestrator | }
2026-04-13 00:02:23.494650 | orchestrator |
2026-04-13 00:02:23.494656 | orchestrator | # null_resource.node_semaphore will be created
2026-04-13 00:02:23.494662 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-13 00:02:23.494668 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494674 | orchestrator | }
2026-04-13 00:02:23.494692 | orchestrator |
2026-04-13 00:02:23.494697 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-13 00:02:23.494701 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-13 00:02:23.494705 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.494709 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.494712 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494716 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.494720 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494724 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-13 00:02:23.494727 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494731 | orchestrator | + size = 80
2026-04-13 00:02:23.494735 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.494738 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.494742 | orchestrator | }
2026-04-13 00:02:23.494746 | orchestrator |
2026-04-13 00:02:23.494749 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-13 00:02:23.494753 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.494757 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.494761 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.494765 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494773 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.494777 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494781 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-13 00:02:23.494785 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494788 | orchestrator | + size = 80
2026-04-13 00:02:23.494792 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.494796 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.494799 | orchestrator | }
2026-04-13 00:02:23.494803 | orchestrator |
2026-04-13 00:02:23.494807 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-13 00:02:23.494811 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.494814 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.494818 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.494822 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494825 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.494829 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494833 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-13 00:02:23.494836 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494840 | orchestrator | + size = 80
2026-04-13 00:02:23.494844 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.494848 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.494851 | orchestrator | }
2026-04-13 00:02:23.494855 | orchestrator |
2026-04-13 00:02:23.494859 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-13 00:02:23.494862 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.494866 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.494870 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.494873 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494877 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.494881 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494885 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-13 00:02:23.494888 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494892 | orchestrator | + size = 80
2026-04-13 00:02:23.494896 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.494899 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.494903 | orchestrator | }
2026-04-13 00:02:23.494907 | orchestrator |
2026-04-13 00:02:23.494910 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-13 00:02:23.494914 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.494918 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.494921 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.494925 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494929 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.494933 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494939 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-13 00:02:23.494943 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.494946 | orchestrator | + size = 80
2026-04-13 00:02:23.494950 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.494954 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.494958 | orchestrator | }
2026-04-13 00:02:23.494961 | orchestrator |
2026-04-13 00:02:23.494965 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-13 00:02:23.494969 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.494973 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.494976 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.494980 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.494987 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.494991 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.494995 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-13 00:02:23.494998 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495002 | orchestrator | + size = 80
2026-04-13 00:02:23.495006 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495010 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495013 | orchestrator | }
2026-04-13 00:02:23.495017 | orchestrator |
2026-04-13 00:02:23.495021 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-13 00:02:23.495028 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-13 00:02:23.495032 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495036 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495040 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495043 | orchestrator | + image_id = (known after apply)
2026-04-13 00:02:23.495047 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495051 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-13 00:02:23.495055 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495058 | orchestrator | + size = 80
2026-04-13 00:02:23.495062 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495066 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495070 | orchestrator | }
2026-04-13 00:02:23.495073 | orchestrator |
2026-04-13 00:02:23.495077 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-13 00:02:23.495081 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495085 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495088 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495092 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495096 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495100 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-13 00:02:23.495103 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495107 | orchestrator | + size = 20
2026-04-13 00:02:23.495111 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495115 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495118 | orchestrator | }
2026-04-13 00:02:23.495122 | orchestrator |
2026-04-13 00:02:23.495126 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-13 00:02:23.495129 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495133 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495137 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495140 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495144 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495148 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-13 00:02:23.495152 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495155 | orchestrator | + size = 20
2026-04-13 00:02:23.495159 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495163 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495166 | orchestrator | }
2026-04-13 00:02:23.495170 | orchestrator |
2026-04-13 00:02:23.495174 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-13 00:02:23.495178 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495181 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495185 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495189 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495193 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495196 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-13 00:02:23.495200 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495212 | orchestrator | + size = 20
2026-04-13 00:02:23.495216 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495220 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495224 | orchestrator | }
2026-04-13 00:02:23.495227 | orchestrator |
2026-04-13 00:02:23.495231 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-13 00:02:23.495235 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495238 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495242 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495246 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495250 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495253 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-13 00:02:23.495257 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495261 | orchestrator | + size = 20
2026-04-13 00:02:23.495265 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495268 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495272 | orchestrator | }
2026-04-13 00:02:23.495276 | orchestrator |
2026-04-13 00:02:23.495279 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-13 00:02:23.495283 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495287 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495291 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495294 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495298 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495302 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-13 00:02:23.495305 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495312 | orchestrator | + size = 20
2026-04-13 00:02:23.495316 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495319 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495323 | orchestrator | }
2026-04-13 00:02:23.495327 | orchestrator |
2026-04-13 00:02:23.495331 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-13 00:02:23.495334 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495338 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495342 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495345 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495349 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495353 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-13 00:02:23.495356 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495360 | orchestrator | + size = 20
2026-04-13 00:02:23.495364 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495368 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495371 | orchestrator | }
2026-04-13 00:02:23.495375 | orchestrator |
2026-04-13 00:02:23.495379 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-13 00:02:23.495383 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495386 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495390 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495394 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495401 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495405 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-13 00:02:23.495409 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495413 | orchestrator | + size = 20
2026-04-13 00:02:23.495416 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495420 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495424 | orchestrator | }
2026-04-13 00:02:23.495428 | orchestrator |
2026-04-13 00:02:23.495432 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-13 00:02:23.495436 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-13 00:02:23.495443 | orchestrator | + attachment = (known after apply)
2026-04-13 00:02:23.495447 | orchestrator | + availability_zone = "nova"
2026-04-13 00:02:23.495463 | orchestrator | + id = (known after apply)
2026-04-13 00:02:23.495469 | orchestrator | + metadata = (known after apply)
2026-04-13 00:02:23.495476 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-13 00:02:23.495482 | orchestrator | + region = (known after apply)
2026-04-13 00:02:23.495489 | orchestrator | + size = 20
2026-04-13 00:02:23.495494 | orchestrator | + volume_retype_policy = "never"
2026-04-13 00:02:23.495500 | orchestrator | + volume_type = "ssd"
2026-04-13 00:02:23.495506 | orchestrator | }
2026-04-13 00:02:23.495512 | orchestrator |
2026-04-13 00:02:23.495516 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-13 00:02:23.495521 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-13 00:02:23.495525 | orchestrator | + attachment = (known after apply) 2026-04-13 00:02:23.495529 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.495533 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.495537 | orchestrator | + metadata = (known after apply) 2026-04-13 00:02:23.495540 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-13 00:02:23.495544 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.495548 | orchestrator | + size = 20 2026-04-13 00:02:23.495551 | orchestrator | + volume_retype_policy = "never" 2026-04-13 00:02:23.495555 | orchestrator | + volume_type = "ssd" 2026-04-13 00:02:23.495559 | orchestrator | } 2026-04-13 00:02:23.495562 | orchestrator | 2026-04-13 00:02:23.495566 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-13 00:02:23.495569 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-13 00:02:23.495573 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.495577 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.495581 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.495584 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.495588 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.495592 | orchestrator | + config_drive = true 2026-04-13 00:02:23.495596 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.495599 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.495603 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-13 00:02:23.495607 | orchestrator | + force_delete = false 2026-04-13 00:02:23.495610 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.495614 | 
orchestrator | + id = (known after apply) 2026-04-13 00:02:23.495618 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.495621 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.495625 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.495629 | orchestrator | + name = "testbed-manager" 2026-04-13 00:02:23.495632 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.495636 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.495640 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.495644 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.495647 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.495651 | orchestrator | + user_data = (sensitive value) 2026-04-13 00:02:23.495655 | orchestrator | 2026-04-13 00:02:23.495659 | orchestrator | + block_device { 2026-04-13 00:02:23.495662 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.495666 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.495673 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.495678 | orchestrator | + multiattach = false 2026-04-13 00:02:23.495681 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.495685 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.495693 | orchestrator | } 2026-04-13 00:02:23.495696 | orchestrator | 2026-04-13 00:02:23.495700 | orchestrator | + network { 2026-04-13 00:02:23.495704 | orchestrator | + access_network = false 2026-04-13 00:02:23.495708 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.495711 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.495715 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.495719 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.495723 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.495726 | orchestrator | + uuid = (known after apply) 2026-04-13 
00:02:23.495730 | orchestrator | } 2026-04-13 00:02:23.495734 | orchestrator | } 2026-04-13 00:02:23.495738 | orchestrator | 2026-04-13 00:02:23.495741 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-13 00:02:23.495745 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.495749 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.495752 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.495756 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.495760 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.495764 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.495767 | orchestrator | + config_drive = true 2026-04-13 00:02:23.495771 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.495775 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.495778 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.495782 | orchestrator | + force_delete = false 2026-04-13 00:02:23.495786 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.495789 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.495793 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.495797 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.495801 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.495804 | orchestrator | + name = "testbed-node-0" 2026-04-13 00:02:23.495808 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.495815 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.495819 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.495822 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.495826 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.495830 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.495834 | orchestrator | 2026-04-13 00:02:23.495838 | orchestrator | + block_device { 2026-04-13 00:02:23.495841 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.495845 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.495849 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.495852 | orchestrator | + multiattach = false 2026-04-13 00:02:23.495856 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.495860 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.495864 | orchestrator | } 2026-04-13 00:02:23.495867 | orchestrator | 2026-04-13 00:02:23.495871 | orchestrator | + network { 2026-04-13 00:02:23.495875 | orchestrator | + access_network = false 2026-04-13 00:02:23.495878 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.495882 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.495886 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.495890 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.495893 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.495897 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.495901 | orchestrator | } 2026-04-13 00:02:23.495905 | orchestrator | } 2026-04-13 00:02:23.495909 | orchestrator | 2026-04-13 00:02:23.495913 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-13 00:02:23.495916 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.495920 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.495927 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.495931 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.495935 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.495939 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.495942 
| orchestrator | + config_drive = true 2026-04-13 00:02:23.495946 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.495950 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.495953 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.495957 | orchestrator | + force_delete = false 2026-04-13 00:02:23.495960 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.495964 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.495968 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.495972 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.495975 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.495979 | orchestrator | + name = "testbed-node-1" 2026-04-13 00:02:23.495983 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.495986 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.495990 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.495994 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.495998 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.496001 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.496005 | orchestrator | 2026-04-13 00:02:23.496009 | orchestrator | + block_device { 2026-04-13 00:02:23.496013 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.496016 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.496020 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.496024 | orchestrator | + multiattach = false 2026-04-13 00:02:23.496027 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.496031 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496035 | orchestrator | } 2026-04-13 00:02:23.496038 | orchestrator | 2026-04-13 00:02:23.496042 | orchestrator | + network { 2026-04-13 00:02:23.496046 | orchestrator | + access_network = 
false 2026-04-13 00:02:23.496050 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.496053 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.496057 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.496061 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.496064 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.496068 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496072 | orchestrator | } 2026-04-13 00:02:23.496075 | orchestrator | } 2026-04-13 00:02:23.496079 | orchestrator | 2026-04-13 00:02:23.496083 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-13 00:02:23.496087 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.496090 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.496095 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.496099 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.496102 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.496108 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.496112 | orchestrator | + config_drive = true 2026-04-13 00:02:23.496116 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.496120 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.496123 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.496127 | orchestrator | + force_delete = false 2026-04-13 00:02:23.496131 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.496135 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496138 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.496146 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.496149 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.496153 | orchestrator | + name = 
"testbed-node-2" 2026-04-13 00:02:23.496157 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.496160 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496164 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.496168 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.496172 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.496175 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.496179 | orchestrator | 2026-04-13 00:02:23.496183 | orchestrator | + block_device { 2026-04-13 00:02:23.496186 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.496190 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.496194 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.496200 | orchestrator | + multiattach = false 2026-04-13 00:02:23.496204 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.496208 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496211 | orchestrator | } 2026-04-13 00:02:23.496215 | orchestrator | 2026-04-13 00:02:23.496219 | orchestrator | + network { 2026-04-13 00:02:23.496223 | orchestrator | + access_network = false 2026-04-13 00:02:23.496226 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.496230 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.496234 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.496237 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.496241 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.496245 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496248 | orchestrator | } 2026-04-13 00:02:23.496252 | orchestrator | } 2026-04-13 00:02:23.496256 | orchestrator | 2026-04-13 00:02:23.496259 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-13 00:02:23.496263 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.496267 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.496270 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.496274 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.496278 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.496281 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.496285 | orchestrator | + config_drive = true 2026-04-13 00:02:23.496289 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.496292 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.496296 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.496300 | orchestrator | + force_delete = false 2026-04-13 00:02:23.496303 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.496307 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496311 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.496315 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.496318 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.496322 | orchestrator | + name = "testbed-node-3" 2026-04-13 00:02:23.496326 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.496330 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496333 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.496337 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.496341 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.496344 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.496348 | orchestrator | 2026-04-13 00:02:23.496352 | orchestrator | + block_device { 2026-04-13 00:02:23.496359 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.496363 | orchestrator | + delete_on_termination = false 2026-04-13 
00:02:23.496366 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.496373 | orchestrator | + multiattach = false 2026-04-13 00:02:23.496377 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.496381 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496384 | orchestrator | } 2026-04-13 00:02:23.496388 | orchestrator | 2026-04-13 00:02:23.496392 | orchestrator | + network { 2026-04-13 00:02:23.496395 | orchestrator | + access_network = false 2026-04-13 00:02:23.496399 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.496403 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.496406 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.496410 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.496414 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.496417 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496421 | orchestrator | } 2026-04-13 00:02:23.496425 | orchestrator | } 2026-04-13 00:02:23.496429 | orchestrator | 2026-04-13 00:02:23.496432 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-13 00:02:23.496436 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.496440 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.496444 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.496447 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.496465 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.496469 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.496473 | orchestrator | + config_drive = true 2026-04-13 00:02:23.496476 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.496480 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.496484 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.496487 | 
orchestrator | + force_delete = false 2026-04-13 00:02:23.496491 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.496495 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496499 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.496502 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.496506 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.496510 | orchestrator | + name = "testbed-node-4" 2026-04-13 00:02:23.496513 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.496517 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496521 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.496525 | orchestrator | + stop_before_destroy = false 2026-04-13 00:02:23.496529 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.496533 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.496536 | orchestrator | 2026-04-13 00:02:23.496540 | orchestrator | + block_device { 2026-04-13 00:02:23.496544 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.496548 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.496551 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.496555 | orchestrator | + multiattach = false 2026-04-13 00:02:23.496559 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.496562 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496566 | orchestrator | } 2026-04-13 00:02:23.496570 | orchestrator | 2026-04-13 00:02:23.496573 | orchestrator | + network { 2026-04-13 00:02:23.496577 | orchestrator | + access_network = false 2026-04-13 00:02:23.496581 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.496584 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.496588 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.496592 | orchestrator | + name = (known 
after apply) 2026-04-13 00:02:23.496596 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.496603 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496607 | orchestrator | } 2026-04-13 00:02:23.496610 | orchestrator | } 2026-04-13 00:02:23.496618 | orchestrator | 2026-04-13 00:02:23.496622 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-13 00:02:23.496625 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-13 00:02:23.496629 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-13 00:02:23.496633 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-13 00:02:23.496637 | orchestrator | + all_metadata = (known after apply) 2026-04-13 00:02:23.496640 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.496644 | orchestrator | + availability_zone = "nova" 2026-04-13 00:02:23.496648 | orchestrator | + config_drive = true 2026-04-13 00:02:23.496651 | orchestrator | + created = (known after apply) 2026-04-13 00:02:23.496655 | orchestrator | + flavor_id = (known after apply) 2026-04-13 00:02:23.496659 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-13 00:02:23.496663 | orchestrator | + force_delete = false 2026-04-13 00:02:23.496669 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-13 00:02:23.496673 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496677 | orchestrator | + image_id = (known after apply) 2026-04-13 00:02:23.496681 | orchestrator | + image_name = (known after apply) 2026-04-13 00:02:23.496684 | orchestrator | + key_pair = "testbed" 2026-04-13 00:02:23.496688 | orchestrator | + name = "testbed-node-5" 2026-04-13 00:02:23.496692 | orchestrator | + power_state = "active" 2026-04-13 00:02:23.496695 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496699 | orchestrator | + security_groups = (known after apply) 2026-04-13 00:02:23.496703 | orchestrator | + 
stop_before_destroy = false 2026-04-13 00:02:23.496707 | orchestrator | + updated = (known after apply) 2026-04-13 00:02:23.496710 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-13 00:02:23.496714 | orchestrator | 2026-04-13 00:02:23.496718 | orchestrator | + block_device { 2026-04-13 00:02:23.496721 | orchestrator | + boot_index = 0 2026-04-13 00:02:23.496725 | orchestrator | + delete_on_termination = false 2026-04-13 00:02:23.496729 | orchestrator | + destination_type = "volume" 2026-04-13 00:02:23.496733 | orchestrator | + multiattach = false 2026-04-13 00:02:23.496736 | orchestrator | + source_type = "volume" 2026-04-13 00:02:23.496740 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496744 | orchestrator | } 2026-04-13 00:02:23.496747 | orchestrator | 2026-04-13 00:02:23.496751 | orchestrator | + network { 2026-04-13 00:02:23.496755 | orchestrator | + access_network = false 2026-04-13 00:02:23.496758 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-13 00:02:23.496762 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-13 00:02:23.496766 | orchestrator | + mac = (known after apply) 2026-04-13 00:02:23.496770 | orchestrator | + name = (known after apply) 2026-04-13 00:02:23.496773 | orchestrator | + port = (known after apply) 2026-04-13 00:02:23.496777 | orchestrator | + uuid = (known after apply) 2026-04-13 00:02:23.496781 | orchestrator | } 2026-04-13 00:02:23.496785 | orchestrator | } 2026-04-13 00:02:23.496788 | orchestrator | 2026-04-13 00:02:23.496792 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-13 00:02:23.496796 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-13 00:02:23.496799 | orchestrator | + fingerprint = (known after apply) 2026-04-13 00:02:23.496803 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496807 | orchestrator | + name = "testbed" 2026-04-13 00:02:23.496810 | orchestrator | + private_key = 
(sensitive value) 2026-04-13 00:02:23.496814 | orchestrator | + public_key = (known after apply) 2026-04-13 00:02:23.496818 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496821 | orchestrator | + user_id = (known after apply) 2026-04-13 00:02:23.496825 | orchestrator | } 2026-04-13 00:02:23.496829 | orchestrator | 2026-04-13 00:02:23.496833 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-13 00:02:23.496836 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.496843 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.496847 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496851 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.496854 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496858 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.496862 | orchestrator | } 2026-04-13 00:02:23.496865 | orchestrator | 2026-04-13 00:02:23.496869 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-13 00:02:23.496873 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.496877 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.496880 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496884 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.496888 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496891 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.496895 | orchestrator | } 2026-04-13 00:02:23.496899 | orchestrator | 2026-04-13 00:02:23.496902 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-13 00:02:23.496906 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-04-13 00:02:23.496910 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.496913 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496917 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.496921 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496925 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.496928 | orchestrator | } 2026-04-13 00:02:23.496932 | orchestrator | 2026-04-13 00:02:23.496938 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-04-13 00:02:23.496941 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.496945 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.496949 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496952 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.496956 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496960 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.496964 | orchestrator | } 2026-04-13 00:02:23.496969 | orchestrator | 2026-04-13 00:02:23.496973 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-04-13 00:02:23.496977 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.496980 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.496984 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.496988 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.496994 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.496998 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.497001 | orchestrator | } 2026-04-13 00:02:23.497006 | orchestrator | 2026-04-13 00:02:23.497010 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-04-13 00:02:23.497014 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.497018 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.497022 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497026 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.497029 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497033 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.497037 | orchestrator | } 2026-04-13 00:02:23.497042 | orchestrator | 2026-04-13 00:02:23.497046 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-04-13 00:02:23.497050 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.497054 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.497057 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497061 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.497065 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497071 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.497075 | orchestrator | } 2026-04-13 00:02:23.497079 | orchestrator | 2026-04-13 00:02:23.497083 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-04-13 00:02:23.497086 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.497090 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.497094 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497098 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.497101 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497105 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.497109 | orchestrator | } 2026-04-13 
00:02:23.497114 | orchestrator | 2026-04-13 00:02:23.497118 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2026-04-13 00:02:23.497122 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-13 00:02:23.497125 | orchestrator | + device = (known after apply) 2026-04-13 00:02:23.497129 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497133 | orchestrator | + instance_id = (known after apply) 2026-04-13 00:02:23.497137 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497140 | orchestrator | + volume_id = (known after apply) 2026-04-13 00:02:23.497144 | orchestrator | } 2026-04-13 00:02:23.497148 | orchestrator | 2026-04-13 00:02:23.497151 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2026-04-13 00:02:23.497156 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2026-04-13 00:02:23.497160 | orchestrator | + fixed_ip = (known after apply) 2026-04-13 00:02:23.497163 | orchestrator | + floating_ip = (known after apply) 2026-04-13 00:02:23.497167 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497171 | orchestrator | + port_id = (known after apply) 2026-04-13 00:02:23.497175 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497178 | orchestrator | } 2026-04-13 00:02:23.497183 | orchestrator | 2026-04-13 00:02:23.497187 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created 2026-04-13 00:02:23.497191 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2026-04-13 00:02:23.497195 | orchestrator | + address = (known after apply) 2026-04-13 00:02:23.497198 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497202 | orchestrator | + dns_domain = (known after apply) 2026-04-13 00:02:23.497206 | orchestrator | 
+ dns_name = (known after apply) 2026-04-13 00:02:23.497209 | orchestrator | + fixed_ip = (known after apply) 2026-04-13 00:02:23.497213 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497217 | orchestrator | + pool = "public" 2026-04-13 00:02:23.497221 | orchestrator | + port_id = (known after apply) 2026-04-13 00:02:23.497224 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497228 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.497232 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.497236 | orchestrator | } 2026-04-13 00:02:23.497241 | orchestrator | 2026-04-13 00:02:23.497245 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-04-13 00:02:23.497249 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-04-13 00:02:23.497252 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.497256 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497260 | orchestrator | + availability_zone_hints = [ 2026-04-13 00:02:23.497263 | orchestrator | + "nova", 2026-04-13 00:02:23.497267 | orchestrator | ] 2026-04-13 00:02:23.497271 | orchestrator | + dns_domain = (known after apply) 2026-04-13 00:02:23.497275 | orchestrator | + external = (known after apply) 2026-04-13 00:02:23.497278 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497282 | orchestrator | + mtu = (known after apply) 2026-04-13 00:02:23.497286 | orchestrator | + name = "net-testbed-management" 2026-04-13 00:02:23.497289 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.497298 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.497301 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497305 | orchestrator | + shared = (known after apply) 2026-04-13 00:02:23.497309 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.497313 | 
orchestrator | + transparent_vlan = (known after apply) 2026-04-13 00:02:23.497316 | orchestrator | 2026-04-13 00:02:23.497320 | orchestrator | + segments (known after apply) 2026-04-13 00:02:23.497324 | orchestrator | } 2026-04-13 00:02:23.497355 | orchestrator | 2026-04-13 00:02:23.497360 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-04-13 00:02:23.497364 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-04-13 00:02:23.497368 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.497371 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.497375 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.497382 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497385 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.497389 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.497393 | orchestrator | + dns_assignment = (known after apply) 2026-04-13 00:02:23.497396 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.497400 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497404 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.497407 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.497411 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.497415 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.497418 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497422 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 00:02:23.497426 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.497429 | orchestrator | 2026-04-13 00:02:23.497433 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497437 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 
00:02:23.497440 | orchestrator | } 2026-04-13 00:02:23.497444 | orchestrator | 2026-04-13 00:02:23.497448 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.497465 | orchestrator | 2026-04-13 00:02:23.497469 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.497473 | orchestrator | + ip_address = "192.168.16.5" 2026-04-13 00:02:23.497477 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.497480 | orchestrator | } 2026-04-13 00:02:23.497484 | orchestrator | } 2026-04-13 00:02:23.497529 | orchestrator | 2026-04-13 00:02:23.497534 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-04-13 00:02:23.497538 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-13 00:02:23.497541 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.497545 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.497549 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.497553 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497557 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.497561 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.497564 | orchestrator | + dns_assignment = (known after apply) 2026-04-13 00:02:23.497568 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.497572 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497576 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.497579 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.497583 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.497587 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.497599 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497607 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 
00:02:23.497611 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.497614 | orchestrator | 2026-04-13 00:02:23.497618 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497622 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-13 00:02:23.497625 | orchestrator | } 2026-04-13 00:02:23.497629 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497633 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 00:02:23.497637 | orchestrator | } 2026-04-13 00:02:23.497640 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497644 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-13 00:02:23.497648 | orchestrator | } 2026-04-13 00:02:23.497651 | orchestrator | 2026-04-13 00:02:23.497655 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.497659 | orchestrator | 2026-04-13 00:02:23.497663 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.497666 | orchestrator | + ip_address = "192.168.16.10" 2026-04-13 00:02:23.497670 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.497674 | orchestrator | } 2026-04-13 00:02:23.497678 | orchestrator | } 2026-04-13 00:02:23.497683 | orchestrator | 2026-04-13 00:02:23.497687 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-04-13 00:02:23.497691 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-13 00:02:23.497695 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.497698 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.497702 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.497706 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497709 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.497713 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.497717 | orchestrator | + dns_assignment = (known after 
apply) 2026-04-13 00:02:23.497721 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.497724 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497728 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.497732 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.497736 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.497739 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.497743 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497747 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 00:02:23.497750 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.497754 | orchestrator | 2026-04-13 00:02:23.497758 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497762 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-13 00:02:23.497765 | orchestrator | } 2026-04-13 00:02:23.497769 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497773 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 00:02:23.497777 | orchestrator | } 2026-04-13 00:02:23.497780 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497784 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-13 00:02:23.497788 | orchestrator | } 2026-04-13 00:02:23.497792 | orchestrator | 2026-04-13 00:02:23.497795 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.497799 | orchestrator | 2026-04-13 00:02:23.497803 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.497807 | orchestrator | + ip_address = "192.168.16.11" 2026-04-13 00:02:23.497810 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.497814 | orchestrator | } 2026-04-13 00:02:23.497818 | orchestrator | } 2026-04-13 00:02:23.497823 | orchestrator | 2026-04-13 00:02:23.497827 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-04-13 
00:02:23.497830 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-13 00:02:23.497834 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.497838 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.497842 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.497845 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497852 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.497856 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.497860 | orchestrator | + dns_assignment = (known after apply) 2026-04-13 00:02:23.497863 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.497870 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.497874 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.497877 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.497881 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.497885 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.497889 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.497892 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 00:02:23.497896 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.497900 | orchestrator | 2026-04-13 00:02:23.497903 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497907 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-13 00:02:23.497911 | orchestrator | } 2026-04-13 00:02:23.497915 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497918 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 00:02:23.497922 | orchestrator | } 2026-04-13 00:02:23.497926 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.497929 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-13 00:02:23.497933 
| orchestrator | } 2026-04-13 00:02:23.497937 | orchestrator | 2026-04-13 00:02:23.497941 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.497944 | orchestrator | 2026-04-13 00:02:23.497948 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.497952 | orchestrator | + ip_address = "192.168.16.12" 2026-04-13 00:02:23.497956 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.497959 | orchestrator | } 2026-04-13 00:02:23.497963 | orchestrator | } 2026-04-13 00:02:23.497968 | orchestrator | 2026-04-13 00:02:23.497972 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-04-13 00:02:23.497976 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-13 00:02:23.497980 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.497983 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.497987 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.497991 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.497995 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.497998 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.498002 | orchestrator | + dns_assignment = (known after apply) 2026-04-13 00:02:23.498006 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.498009 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498057 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.498062 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.498066 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.498070 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.498073 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498077 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 00:02:23.498081 | 
orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498084 | orchestrator | 2026-04-13 00:02:23.498088 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498092 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-13 00:02:23.498095 | orchestrator | } 2026-04-13 00:02:23.498099 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498103 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 00:02:23.498106 | orchestrator | } 2026-04-13 00:02:23.498110 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498114 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-13 00:02:23.498126 | orchestrator | } 2026-04-13 00:02:23.498130 | orchestrator | 2026-04-13 00:02:23.498139 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.498143 | orchestrator | 2026-04-13 00:02:23.498147 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.498150 | orchestrator | + ip_address = "192.168.16.13" 2026-04-13 00:02:23.498154 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.498158 | orchestrator | } 2026-04-13 00:02:23.498162 | orchestrator | } 2026-04-13 00:02:23.498168 | orchestrator | 2026-04-13 00:02:23.498172 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-04-13 00:02:23.498175 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-13 00:02:23.498179 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.498183 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.498186 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.498190 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.498194 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.498197 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.498201 | orchestrator | + dns_assignment = (known after apply) 2026-04-13 
00:02:23.498205 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.498208 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498212 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.498216 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.498220 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.498223 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.498227 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498231 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 00:02:23.498235 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498239 | orchestrator | 2026-04-13 00:02:23.498243 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498246 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-13 00:02:23.498250 | orchestrator | } 2026-04-13 00:02:23.498254 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498258 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 00:02:23.498261 | orchestrator | } 2026-04-13 00:02:23.498265 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498269 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-13 00:02:23.498272 | orchestrator | } 2026-04-13 00:02:23.498276 | orchestrator | 2026-04-13 00:02:23.498280 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.498284 | orchestrator | 2026-04-13 00:02:23.498287 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.498291 | orchestrator | + ip_address = "192.168.16.14" 2026-04-13 00:02:23.498295 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.498299 | orchestrator | } 2026-04-13 00:02:23.498302 | orchestrator | } 2026-04-13 00:02:23.498308 | orchestrator | 2026-04-13 00:02:23.498312 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-04-13 00:02:23.498316 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-04-13 00:02:23.498319 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.498323 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-13 00:02:23.498327 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-13 00:02:23.498331 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.498334 | orchestrator | + device_id = (known after apply) 2026-04-13 00:02:23.498338 | orchestrator | + device_owner = (known after apply) 2026-04-13 00:02:23.498342 | orchestrator | + dns_assignment = (known after apply) 2026-04-13 00:02:23.498345 | orchestrator | + dns_name = (known after apply) 2026-04-13 00:02:23.498349 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498353 | orchestrator | + mac_address = (known after apply) 2026-04-13 00:02:23.498357 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.498360 | orchestrator | + port_security_enabled = (known after apply) 2026-04-13 00:02:23.498364 | orchestrator | + qos_policy_id = (known after apply) 2026-04-13 00:02:23.498371 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498375 | orchestrator | + security_group_ids = (known after apply) 2026-04-13 00:02:23.498378 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498382 | orchestrator | 2026-04-13 00:02:23.498386 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498389 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-13 00:02:23.498393 | orchestrator | } 2026-04-13 00:02:23.498397 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498400 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-13 00:02:23.498404 | orchestrator | } 2026-04-13 00:02:23.498408 | orchestrator | + allowed_address_pairs { 2026-04-13 00:02:23.498412 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-13 00:02:23.498416 | orchestrator | } 2026-04-13 
00:02:23.498419 | orchestrator | 2026-04-13 00:02:23.498426 | orchestrator | + binding (known after apply) 2026-04-13 00:02:23.498429 | orchestrator | 2026-04-13 00:02:23.498433 | orchestrator | + fixed_ip { 2026-04-13 00:02:23.498437 | orchestrator | + ip_address = "192.168.16.15" 2026-04-13 00:02:23.498441 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.498444 | orchestrator | } 2026-04-13 00:02:23.498448 | orchestrator | } 2026-04-13 00:02:23.498476 | orchestrator | 2026-04-13 00:02:23.498480 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-04-13 00:02:23.498484 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-04-13 00:02:23.498488 | orchestrator | + force_destroy = false 2026-04-13 00:02:23.498492 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498496 | orchestrator | + port_id = (known after apply) 2026-04-13 00:02:23.498499 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498503 | orchestrator | + router_id = (known after apply) 2026-04-13 00:02:23.498507 | orchestrator | + subnet_id = (known after apply) 2026-04-13 00:02:23.498510 | orchestrator | } 2026-04-13 00:02:23.498514 | orchestrator | 2026-04-13 00:02:23.498518 | orchestrator | # openstack_networking_router_v2.router will be created 2026-04-13 00:02:23.498522 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-04-13 00:02:23.498525 | orchestrator | + admin_state_up = (known after apply) 2026-04-13 00:02:23.498529 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.498533 | orchestrator | + availability_zone_hints = [ 2026-04-13 00:02:23.498537 | orchestrator | + "nova", 2026-04-13 00:02:23.498540 | orchestrator | ] 2026-04-13 00:02:23.498544 | orchestrator | + distributed = (known after apply) 2026-04-13 00:02:23.498548 | orchestrator | + enable_snat = (known after apply) 2026-04-13 00:02:23.498551 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-04-13 00:02:23.498555 | orchestrator | + external_qos_policy_id = (known after apply) 2026-04-13 00:02:23.498559 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498563 | orchestrator | + name = "testbed" 2026-04-13 00:02:23.498566 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498570 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498574 | orchestrator | 2026-04-13 00:02:23.498578 | orchestrator | + external_fixed_ip (known after apply) 2026-04-13 00:02:23.498581 | orchestrator | } 2026-04-13 00:02:23.498587 | orchestrator | 2026-04-13 00:02:23.498591 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-04-13 00:02:23.498596 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-04-13 00:02:23.498600 | orchestrator | + description = "ssh" 2026-04-13 00:02:23.498603 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498607 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498611 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498615 | orchestrator | + port_range_max = 22 2026-04-13 00:02:23.498618 | orchestrator | + port_range_min = 22 2026-04-13 00:02:23.498622 | orchestrator | + protocol = "tcp" 2026-04-13 00:02:23.498626 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498634 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.498638 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498641 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.498645 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498649 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498652 | orchestrator | } 2026-04-13 00:02:23.498656 | orchestrator | 2026-04-13 
00:02:23.498660 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-04-13 00:02:23.498664 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-04-13 00:02:23.498667 | orchestrator | + description = "wireguard" 2026-04-13 00:02:23.498671 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498675 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498678 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498682 | orchestrator | + port_range_max = 51820 2026-04-13 00:02:23.498686 | orchestrator | + port_range_min = 51820 2026-04-13 00:02:23.498690 | orchestrator | + protocol = "udp" 2026-04-13 00:02:23.498693 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498697 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.498701 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498704 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.498708 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498712 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498715 | orchestrator | } 2026-04-13 00:02:23.498719 | orchestrator | 2026-04-13 00:02:23.498723 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-04-13 00:02:23.498727 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-04-13 00:02:23.498730 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498734 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498738 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498742 | orchestrator | + protocol = "tcp" 2026-04-13 00:02:23.498745 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498749 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-04-13 00:02:23.498753 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498756 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-04-13 00:02:23.498760 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498764 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498767 | orchestrator | } 2026-04-13 00:02:23.498771 | orchestrator | 2026-04-13 00:02:23.498775 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-04-13 00:02:23.498779 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-04-13 00:02:23.498783 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498786 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498790 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498794 | orchestrator | + protocol = "udp" 2026-04-13 00:02:23.498798 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498802 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.498805 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498809 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-04-13 00:02:23.498813 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498816 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498820 | orchestrator | } 2026-04-13 00:02:23.498824 | orchestrator | 2026-04-13 00:02:23.498827 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-04-13 00:02:23.498834 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-04-13 00:02:23.498838 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498842 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498845 | orchestrator | + id = 
(known after apply) 2026-04-13 00:02:23.498849 | orchestrator | + protocol = "icmp" 2026-04-13 00:02:23.498853 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498856 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.498860 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498864 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.498868 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498871 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498875 | orchestrator | } 2026-04-13 00:02:23.498881 | orchestrator | 2026-04-13 00:02:23.498884 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2026-04-13 00:02:23.498888 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2026-04-13 00:02:23.498892 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498896 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498899 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498903 | orchestrator | + protocol = "tcp" 2026-04-13 00:02:23.498907 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498911 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.498917 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498921 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.498924 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498928 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498932 | orchestrator | } 2026-04-13 00:02:23.498936 | orchestrator | 2026-04-13 00:02:23.498939 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2026-04-13 00:02:23.498943 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" 
"security_group_node_rule2" { 2026-04-13 00:02:23.498947 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.498951 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.498954 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.498958 | orchestrator | + protocol = "udp" 2026-04-13 00:02:23.498962 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.498965 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.498969 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.498973 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.498977 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.498980 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.498984 | orchestrator | } 2026-04-13 00:02:23.498988 | orchestrator | 2026-04-13 00:02:23.498992 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2026-04-13 00:02:23.498995 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2026-04-13 00:02:23.498999 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.499005 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.499009 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499012 | orchestrator | + protocol = "icmp" 2026-04-13 00:02:23.499016 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.499020 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.499024 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.499027 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.499031 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.499035 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.499042 | orchestrator | } 2026-04-13 00:02:23.499046 | orchestrator | 2026-04-13 
00:02:23.499049 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-04-13 00:02:23.499053 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-04-13 00:02:23.499057 | orchestrator | + description = "vrrp" 2026-04-13 00:02:23.499061 | orchestrator | + direction = "ingress" 2026-04-13 00:02:23.499065 | orchestrator | + ethertype = "IPv4" 2026-04-13 00:02:23.499068 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499072 | orchestrator | + protocol = "112" 2026-04-13 00:02:23.499076 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.499079 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-13 00:02:23.499083 | orchestrator | + remote_group_id = (known after apply) 2026-04-13 00:02:23.499087 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-13 00:02:23.499091 | orchestrator | + security_group_id = (known after apply) 2026-04-13 00:02:23.499094 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.499098 | orchestrator | } 2026-04-13 00:02:23.499102 | orchestrator | 2026-04-13 00:02:23.499106 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-04-13 00:02:23.499110 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-04-13 00:02:23.499114 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.499117 | orchestrator | + description = "management security group" 2026-04-13 00:02:23.499121 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499125 | orchestrator | + name = "testbed-management" 2026-04-13 00:02:23.499128 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.499132 | orchestrator | + stateful = (known after apply) 2026-04-13 00:02:23.499136 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.499139 | orchestrator | } 2026-04-13 
00:02:23.499143 | orchestrator | 2026-04-13 00:02:23.499147 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-04-13 00:02:23.499151 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-04-13 00:02:23.499154 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.499158 | orchestrator | + description = "node security group" 2026-04-13 00:02:23.499162 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499166 | orchestrator | + name = "testbed-node" 2026-04-13 00:02:23.499169 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.499173 | orchestrator | + stateful = (known after apply) 2026-04-13 00:02:23.499177 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.499180 | orchestrator | } 2026-04-13 00:02:23.499186 | orchestrator | 2026-04-13 00:02:23.499190 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-04-13 00:02:23.499193 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-04-13 00:02:23.499197 | orchestrator | + all_tags = (known after apply) 2026-04-13 00:02:23.499201 | orchestrator | + cidr = "192.168.16.0/20" 2026-04-13 00:02:23.499204 | orchestrator | + dns_nameservers = [ 2026-04-13 00:02:23.499208 | orchestrator | + "8.8.8.8", 2026-04-13 00:02:23.499212 | orchestrator | + "9.9.9.9", 2026-04-13 00:02:23.499216 | orchestrator | ] 2026-04-13 00:02:23.499220 | orchestrator | + enable_dhcp = true 2026-04-13 00:02:23.499223 | orchestrator | + gateway_ip = (known after apply) 2026-04-13 00:02:23.499227 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499231 | orchestrator | + ip_version = 4 2026-04-13 00:02:23.499235 | orchestrator | + ipv6_address_mode = (known after apply) 2026-04-13 00:02:23.499238 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-04-13 00:02:23.499242 | orchestrator | + name = "subnet-testbed-management" 
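The secgroup plan entries above (ICMP/TCP/UDP node rules and the VRRP rule with IP protocol number 112) correspond to declarations roughly like the following sketch. Attribute values are copied from the plan output; the `security_group_id` wiring to `security_group_node` is an assumption, since the plan shows only computed attributes, not the source HCL.

```hcl
# Sketch of the VRRP rule shown in the plan above. Protocol "112" is the
# IANA IP protocol number for VRRP, needed so keepalived traffic between
# nodes is not dropped. The security_group_id reference is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```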
2026-04-13 00:02:23.499246 | orchestrator | + network_id = (known after apply) 2026-04-13 00:02:23.499250 | orchestrator | + no_gateway = false 2026-04-13 00:02:23.499253 | orchestrator | + region = (known after apply) 2026-04-13 00:02:23.499257 | orchestrator | + service_types = (known after apply) 2026-04-13 00:02:23.499264 | orchestrator | + tenant_id = (known after apply) 2026-04-13 00:02:23.499268 | orchestrator | 2026-04-13 00:02:23.499272 | orchestrator | + allocation_pool { 2026-04-13 00:02:23.499276 | orchestrator | + end = "192.168.31.250" 2026-04-13 00:02:23.499279 | orchestrator | + start = "192.168.31.200" 2026-04-13 00:02:23.499283 | orchestrator | } 2026-04-13 00:02:23.499287 | orchestrator | } 2026-04-13 00:02:23.499290 | orchestrator | 2026-04-13 00:02:23.499294 | orchestrator | # terraform_data.image will be created 2026-04-13 00:02:23.499298 | orchestrator | + resource "terraform_data" "image" { 2026-04-13 00:02:23.499302 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499305 | orchestrator | + input = "Ubuntu 24.04" 2026-04-13 00:02:23.499309 | orchestrator | + output = (known after apply) 2026-04-13 00:02:23.499313 | orchestrator | } 2026-04-13 00:02:23.499316 | orchestrator | 2026-04-13 00:02:23.499320 | orchestrator | # terraform_data.image_node will be created 2026-04-13 00:02:23.499324 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-13 00:02:23.499328 | orchestrator | + id = (known after apply) 2026-04-13 00:02:23.499331 | orchestrator | + input = "Ubuntu 24.04" 2026-04-13 00:02:23.499335 | orchestrator | + output = (known after apply) 2026-04-13 00:02:23.499339 | orchestrator | } 2026-04-13 00:02:23.499343 | orchestrator | 2026-04-13 00:02:23.499346 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
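The `subnet_management` plan above can be read back into source form as the sketch below. All values (CIDR, DNS servers, name, allocation pool) are taken from the plan output; the `network_id` reference is an assumption. Note the allocation pool `192.168.31.200`–`192.168.31.250` sits at the top of the `192.168.16.0/20` range (which spans `192.168.16.0`–`192.168.31.255`), leaving the lower addresses free for static assignment.

```hcl
# Sketch reconstructed from the plan output; network_id wiring is assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out only this slice; the rest of the /20 stays static.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```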
2026-04-13 00:02:23.499350 | orchestrator | 2026-04-13 00:02:23.499354 | orchestrator | Changes to Outputs: 2026-04-13 00:02:23.499357 | orchestrator | + manager_address = (sensitive value) 2026-04-13 00:02:23.499361 | orchestrator | + private_key = (sensitive value) 2026-04-13 00:02:23.680940 | orchestrator | terraform_data.image_node: Creating... 2026-04-13 00:02:23.681209 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=aa611b4e-d5dc-602c-c9ef-0d2ef7466660] 2026-04-13 00:02:23.741374 | orchestrator | terraform_data.image: Creating... 2026-04-13 00:02:23.741436 | orchestrator | terraform_data.image: Creation complete after 0s [id=5deac2c0-17e2-a0a1-040a-6b0837e48ea2] 2026-04-13 00:02:23.758100 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-13 00:02:23.760466 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-13 00:02:23.765892 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-13 00:02:23.775972 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-13 00:02:23.778734 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-13 00:02:23.778778 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-13 00:02:23.784982 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-13 00:02:23.794721 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-13 00:02:23.794764 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-13 00:02:23.827698 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-13 00:02:24.258067 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-13 00:02:24.265748 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2026-04-13 00:02:24.314248 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-04-13 00:02:24.323007 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-13 00:02:24.325515 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-13 00:02:24.334102 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-13 00:02:25.143715 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=e71f64df-7ee7-4185-98ea-f9cd4c9689d7] 2026-04-13 00:02:25.153550 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-13 00:02:27.619995 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=1ff476bc-ae0b-4cfd-96fa-c57a101f59cb] 2026-04-13 00:02:27.639844 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-13 00:02:27.645036 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=e58cc4cd-c100-42fd-a854-9a07c2c5ceb1] 2026-04-13 00:02:27.647343 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=476877b96659840eacabd2c71d0040bc3ab839c7] 2026-04-13 00:02:27.651959 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-04-13 00:02:27.657546 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-13 00:02:27.682550 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=7652ab20f6a506737a391d3c35827cb8e981324d] 2026-04-13 00:02:27.694981 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-04-13 00:02:27.699660 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=28faf471-35fc-493f-ba87-763b98edc4d7] 2026-04-13 00:02:27.722730 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-13 00:02:27.722923 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=70b2b286-75d2-4918-b809-b0d3c77d8089] 2026-04-13 00:02:27.722989 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5e205b26-74df-4a0d-a6b0-fd65d84e1df5] 2026-04-13 00:02:27.729893 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-04-13 00:02:27.730624 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-13 00:02:27.747234 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=40b67a78-e903-4b7b-9416-2311a13eed69] 2026-04-13 00:02:27.747403 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=d506fd3a-4f98-4a08-a2bf-c3638f88932b] 2026-04-13 00:02:27.751910 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-13 00:02:27.752137 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-04-13 00:02:27.792613 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3fbef31d-44a1-4ae9-9145-86033c094687] 2026-04-13 00:02:27.843401 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=2d6b0ac7-37bd-44a3-98bf-24bee37418a9] 2026-04-13 00:02:28.619219 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=91680486-ff8c-4984-80b4-85148b6ff46e] 2026-04-13 00:02:28.964289 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e01445aa-1c83-43d7-b17e-f53a7f327d81] 2026-04-13 00:02:28.971726 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-04-13 00:02:31.261081 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=dea7381b-ad1e-42b1-98ca-267bdb7db168] 2026-04-13 00:02:31.316779 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=cdc8ba01-4f9f-45f9-bedc-50cd21a5940b] 2026-04-13 00:02:31.368858 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=97e6821d-280a-4d97-99b4-3ef3a3e75d06] 2026-04-13 00:02:31.382783 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=10c37310-1140-4628-b353-2a1f2074e1b5] 2026-04-13 00:02:31.387768 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=2cf32096-6de7-4248-ae06-d0996d3d3c8b] 2026-04-13 00:02:31.401960 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=864d1fd1-7283-4358-a23f-be2c6ef28191] 2026-04-13 00:02:31.822851 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=9ed81ea6-679d-4480-8dda-209223e8948d] 2026-04-13 00:02:31.836337 | orchestrator | 
openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-13 00:02:31.836424 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-13 00:02:31.836436 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-04-13 00:02:32.059251 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f66b0e65-950f-4b9d-b9fb-1b77fea16f51] 2026-04-13 00:02:32.069140 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-13 00:02:32.069510 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-04-13 00:02:32.069538 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-13 00:02:32.077258 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-04-13 00:02:32.077437 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-13 00:02:32.078838 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-04-13 00:02:32.084787 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-13 00:02:32.085651 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-13 00:02:32.247578 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=37b80063-dc98-4cae-9921-310fff93214f] 2026-04-13 00:02:32.263247 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-13 00:02:32.323122 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=5637d010-7af0-40bc-8143-cc2ea1aa40a0] 2026-04-13 00:02:32.334704 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-04-13 00:02:32.797316 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=e835b4fb-5b4b-4779-9d4a-be1dd85774ad] 2026-04-13 00:02:32.804554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-13 00:02:32.844484 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3df93b35-b7fd-4c83-942a-2afb77904698] 2026-04-13 00:02:32.849589 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-04-13 00:02:32.917162 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=7354dad6-e901-406d-b5fb-0f87ec4534c6] 2026-04-13 00:02:32.923047 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-13 00:02:33.002073 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=a1e4858d-c2ff-4fbe-b98a-5e77db880bf8] 2026-04-13 00:02:33.008538 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e331f814-ac8f-4ae9-aff0-c700aa90482e] 2026-04-13 00:02:33.010633 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-13 00:02:33.017750 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-13 00:02:33.090323 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=69424e8c-a598-4676-b355-e43b322f572d] 2026-04-13 00:02:33.097375 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-04-13 00:02:33.253415 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a3223aad-97e8-4ce6-8f43-26a6fa257e4e] 2026-04-13 00:02:33.340571 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=0fd9be42-9555-47e3-8055-1d6ba4097a0e] 2026-04-13 00:02:33.466702 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=9d5ecd71-b993-491c-acbd-169d230fc2b1] 2026-04-13 00:02:33.532102 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=178ae0b2-bbe5-4821-a45d-25861ffb95ad] 2026-04-13 00:02:33.636752 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=83b513d5-443e-4a9f-be68-5df5546b2e9b] 2026-04-13 00:02:33.732735 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=2751437d-344c-429e-8224-f20e875b87f4] 2026-04-13 00:02:33.894103 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=4a19a40d-80b9-4ad1-95cd-8b425f819a62] 2026-04-13 00:02:33.902308 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=09493f04-b4aa-44a4-8017-237dbcf04744] 2026-04-13 00:02:34.114089 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=108560b8-e179-4da8-8646-aad44b7686d8] 2026-04-13 00:02:35.136057 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=d39a0eb7-705f-406d-addd-be81f77b2e20] 2026-04-13 00:02:35.145001 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-13 00:02:35.165108 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 
2026-04-13 00:02:35.167050 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-13 00:02:35.179139 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-04-13 00:02:35.184171 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-04-13 00:02:35.188339 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-04-13 00:02:35.198565 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-13 00:02:39.773653 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 5s [id=a778be8d-3505-4d4c-92a0-62d88a23b6bf] 2026-04-13 00:02:39.782364 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-13 00:02:39.784792 | orchestrator | local_file.inventory: Creating... 2026-04-13 00:02:39.792192 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-13 00:02:39.792503 | orchestrator | local_file.inventory: Creation complete after 0s [id=7fe1032e4af1faa47e878ef64632d2a3308a0806] 2026-04-13 00:02:39.798770 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8ddf6e54198db723747b7d014d07be84a0de935e] 2026-04-13 00:02:41.268862 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=a778be8d-3505-4d4c-92a0-62d88a23b6bf] 2026-04-13 00:02:45.167670 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-13 00:02:45.167818 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-13 00:02:45.180011 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-13 00:02:45.187310 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2026-04-13 00:02:45.189779 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-13 00:02:45.201151 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-04-13 00:02:55.175720 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-13 00:02:55.175838 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-13 00:02:55.181087 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-13 00:02:55.188354 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-13 00:02:55.190654 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-13 00:02:55.202083 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-13 00:02:56.303900 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=50659160-5b09-48c0-944e-11d4506d3a65] 2026-04-13 00:03:05.183192 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-04-13 00:03:05.183305 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-04-13 00:03:05.189710 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-04-13 00:03:05.190865 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-04-13 00:03:05.202167 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2026-04-13 00:03:06.229688 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=5a22818e-7628-405e-8a01-69f1350a79f6] 2026-04-13 00:03:06.303188 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=2668b36f-d1cd-4f6e-b5ae-183e2602ece6] 2026-04-13 00:03:06.502887 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=b0746498-42eb-4dc1-9c77-140173f45a7f] 2026-04-13 00:03:06.963282 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=63b45469-6f91-4862-8d6f-034970e1b3ae] 2026-04-13 00:03:15.190849 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-04-13 00:03:18.077087 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 43s [id=502c7331-2494-41eb-855f-4ad287429e34] 2026-04-13 00:03:18.100915 | orchestrator | null_resource.node_semaphore: Creating... 2026-04-13 00:03:18.118613 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-13 00:03:18.122690 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-04-13 00:03:18.125206 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-13 00:03:18.125672 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-13 00:03:18.128734 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-13 00:03:18.128768 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-13 00:03:18.130901 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 
2026-04-13 00:03:18.131195 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7612855048305356718] 2026-04-13 00:03:18.146992 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-13 00:03:18.149525 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-13 00:03:18.169817 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-04-13 00:03:21.799328 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=50659160-5b09-48c0-944e-11d4506d3a65/1ff476bc-ae0b-4cfd-96fa-c57a101f59cb] 2026-04-13 00:03:21.838261 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=b0746498-42eb-4dc1-9c77-140173f45a7f/d506fd3a-4f98-4a08-a2bf-c3638f88932b] 2026-04-13 00:03:21.857633 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=502c7331-2494-41eb-855f-4ad287429e34/40b67a78-e903-4b7b-9416-2311a13eed69] 2026-04-13 00:03:27.808735 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=502c7331-2494-41eb-855f-4ad287429e34/2d6b0ac7-37bd-44a3-98bf-24bee37418a9] 2026-04-13 00:03:27.895520 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=b0746498-42eb-4dc1-9c77-140173f45a7f/3fbef31d-44a1-4ae9-9145-86033c094687] 2026-04-13 00:03:28.119826 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=50659160-5b09-48c0-944e-11d4506d3a65/e58cc4cd-c100-42fd-a854-9a07c2c5ceb1] 2026-04-13 00:03:28.131589 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Still creating... 
[10s elapsed] 2026-04-13 00:03:28.148830 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Still creating... [10s elapsed] 2026-04-13 00:03:28.151090 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Still creating... [10s elapsed] 2026-04-13 00:03:28.157435 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=b0746498-42eb-4dc1-9c77-140173f45a7f/5e205b26-74df-4a0d-a6b0-fd65d84e1df5] 2026-04-13 00:03:28.170992 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-13 00:03:28.178172 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=502c7331-2494-41eb-855f-4ad287429e34/28faf471-35fc-493f-ba87-763b98edc4d7] 2026-04-13 00:03:28.199109 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=50659160-5b09-48c0-944e-11d4506d3a65/70b2b286-75d2-4918-b809-b0d3c77d8089] 2026-04-13 00:03:38.180662 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-13 00:03:39.288204 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=6e54fcb7-e19b-475f-9241-035f469f5002] 2026-04-13 00:03:39.314445 | orchestrator | 2026-04-13 00:03:39.314588 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
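The `openstack_compute_volume_attach_v2` IDs logged above are composite `<server_id>/<volume_id>` strings; for example, attachment `[0]` pairs `node_server[3]` (`50659160-…`) with `node_volume[0]` (`1ff476bc-…`). A minimal helper to split such an ID when correlating log lines:

```python
# Split a Terraform openstack_compute_volume_attach_v2 composite ID
# ("<server_id>/<volume_id>") into its two UUID parts.
def split_attachment_id(attachment_id: str) -> tuple[str, str]:
    server_id, volume_id = attachment_id.split("/", 1)
    return server_id, volume_id

# Example taken verbatim from the attachment log above.
server, volume = split_attachment_id(
    "50659160-5b09-48c0-944e-11d4506d3a65/1ff476bc-ae0b-4cfd-96fa-c57a101f59cb"
)
```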
2026-04-13 00:03:39.314629 | orchestrator | 2026-04-13 00:03:39.314649 | orchestrator | Outputs: 2026-04-13 00:03:39.314667 | orchestrator | 2026-04-13 00:03:39.314685 | orchestrator | manager_address = 2026-04-13 00:03:39.314703 | orchestrator | private_key = 2026-04-13 00:03:39.745138 | orchestrator | ok: Runtime: 0:01:21.447151 2026-04-13 00:03:39.765463 | 2026-04-13 00:03:39.765584 | TASK [Fetch manager address] 2026-04-13 00:03:40.230652 | orchestrator | ok 2026-04-13 00:03:40.246192 | 2026-04-13 00:03:40.246355 | TASK [Set manager_host address] 2026-04-13 00:03:40.338913 | orchestrator | ok 2026-04-13 00:03:40.347789 | 2026-04-13 00:03:40.347916 | LOOP [Update ansible collections] 2026-04-13 00:03:41.516299 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-13 00:03:41.516676 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-13 00:03:41.516729 | orchestrator | Starting galaxy collection install process 2026-04-13 00:03:41.516759 | orchestrator | Process install dependency map 2026-04-13 00:03:41.516787 | orchestrator | Starting collection install process 2026-04-13 00:03:41.516811 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-04-13 00:03:41.516842 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-04-13 00:03:41.516873 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-13 00:03:41.516936 | orchestrator | ok: Item: commons Runtime: 0:00:00.801759 2026-04-13 00:03:42.403812 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-13 00:03:42.403947 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-13 00:03:42.403979 | orchestrator | Starting galaxy collection 
install process 2026-04-13 00:03:42.404004 | orchestrator | Process install dependency map 2026-04-13 00:03:42.404027 | orchestrator | Starting collection install process 2026-04-13 00:03:42.404049 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-04-13 00:03:42.404070 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-04-13 00:03:42.404090 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-13 00:03:42.404128 | orchestrator | ok: Item: services Runtime: 0:00:00.617890 2026-04-13 00:03:42.419310 | 2026-04-13 00:03:42.419461 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-13 00:03:53.017356 | orchestrator | ok 2026-04-13 00:03:53.025725 | 2026-04-13 00:03:53.025831 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-13 00:04:53.063244 | orchestrator | ok 2026-04-13 00:04:53.071354 | 2026-04-13 00:04:53.071478 | TASK [Fetch manager ssh hostkey] 2026-04-13 00:04:54.649577 | orchestrator | Output suppressed because no_log was given 2026-04-13 00:04:54.659441 | 2026-04-13 00:04:54.659582 | TASK [Get ssh keypair from terraform environment] 2026-04-13 00:04:55.194171 | orchestrator | ok: Runtime: 0:00:00.010001 2026-04-13 00:04:55.210090 | 2026-04-13 00:04:55.210248 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-13 00:04:55.244850 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
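The task "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" above maps onto the `ansible.builtin.wait_for` module's `search_regex` mode, which connects to the port and matches the received banner. A sketch of what such a task could look like, assuming the `manager_host` variable set earlier in the play holds the manager's floating IP:

```yaml
# Sketch only: waits until the manager's SSH daemon answers with its
# "OpenSSH" banner, so later tasks do not race the boot process.
- name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
  ansible.builtin.wait_for:
    host: "{{ manager_host }}"  # assumed variable, set from terraform output
    port: 22
    search_regex: OpenSSH
    timeout: 300
```

Matching on the banner rather than just the open port matters here: the port can accept connections while sshd is still only half-configured by cloud-init, which is also why the log adds a further fixed wait afterwards.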
2026-04-13 00:04:55.253617 | 2026-04-13 00:04:55.253740 | TASK [Run manager part 0] 2026-04-13 00:04:56.065300 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-13 00:04:56.105246 | orchestrator | 2026-04-13 00:04:56.105326 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-13 00:04:56.105340 | orchestrator | 2026-04-13 00:04:56.105369 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-13 00:04:57.842065 | orchestrator | ok: [testbed-manager] 2026-04-13 00:04:57.842126 | orchestrator | 2026-04-13 00:04:57.842151 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-13 00:04:57.842160 | orchestrator | 2026-04-13 00:04:57.842167 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:04:59.528063 | orchestrator | ok: [testbed-manager] 2026-04-13 00:04:59.528122 | orchestrator | 2026-04-13 00:04:59.528135 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-13 00:05:00.108605 | orchestrator | ok: [testbed-manager] 2026-04-13 00:05:00.108637 | orchestrator | 2026-04-13 00:05:00.108643 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-13 00:05:00.151455 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:05:00.151500 | orchestrator | 2026-04-13 00:05:00.151513 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-13 00:05:00.184666 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:05:00.184710 | orchestrator | 2026-04-13 00:05:00.184720 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-13 00:05:00.224723 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:05:00.224762 | 
orchestrator | 2026-04-13 00:05:00.224771 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-13 00:05:00.962577 | orchestrator | changed: [testbed-manager] 2026-04-13 00:05:00.962631 | orchestrator | 2026-04-13 00:05:00.962638 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-13 00:07:57.411308 | orchestrator | changed: [testbed-manager] 2026-04-13 00:07:57.411359 | orchestrator | 2026-04-13 00:07:57.411370 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-13 00:09:18.382092 | orchestrator | changed: [testbed-manager] 2026-04-13 00:09:18.382187 | orchestrator | 2026-04-13 00:09:18.382211 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-13 00:09:40.972685 | orchestrator | changed: [testbed-manager] 2026-04-13 00:09:40.972734 | orchestrator | 2026-04-13 00:09:40.972745 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-13 00:09:50.461100 | orchestrator | changed: [testbed-manager] 2026-04-13 00:09:50.461140 | orchestrator | 2026-04-13 00:09:50.461146 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-13 00:09:50.505777 | orchestrator | ok: [testbed-manager] 2026-04-13 00:09:50.505856 | orchestrator | 2026-04-13 00:09:50.505870 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-13 00:09:51.339354 | orchestrator | ok: [testbed-manager] 2026-04-13 00:09:51.339492 | orchestrator | 2026-04-13 00:09:51.339507 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-13 00:09:52.092279 | orchestrator | changed: [testbed-manager] 2026-04-13 00:09:52.092367 | orchestrator | 2026-04-13 00:09:52.092385 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-04-13 00:09:58.604675 | orchestrator | changed: [testbed-manager] 2026-04-13 00:09:58.604723 | orchestrator | 2026-04-13 00:09:58.604732 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-13 00:10:04.872669 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:04.872748 | orchestrator | 2026-04-13 00:10:04.872761 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-13 00:10:07.754585 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:07.754678 | orchestrator | 2026-04-13 00:10:07.754690 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-13 00:10:09.358287 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:09.358320 | orchestrator | 2026-04-13 00:10:09.358328 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-13 00:10:10.413151 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-13 00:10:10.413272 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-13 00:10:10.413298 | orchestrator | 2026-04-13 00:10:10.413323 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-13 00:10:10.459018 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-13 00:10:10.459106 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-13 00:10:10.459127 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-13 00:10:10.459148 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-13 00:10:13.750833 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-13 00:10:13.750878 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-13 00:10:13.750885 | orchestrator | 2026-04-13 00:10:13.750891 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-13 00:10:14.341311 | orchestrator | changed: [testbed-manager] 2026-04-13 00:10:14.341351 | orchestrator | 2026-04-13 00:10:14.341358 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-13 00:12:37.057456 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-13 00:12:37.057948 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-13 00:12:37.057974 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-13 00:12:37.057987 | orchestrator | 2026-04-13 00:12:37.057999 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-13 00:12:39.504341 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-13 00:12:39.504451 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-13 00:12:39.504476 | orchestrator | 2026-04-13 00:12:39.504498 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-13 00:12:39.504519 | orchestrator | 2026-04-13 00:12:39.504562 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:12:40.972446 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:40.972513 | orchestrator | 2026-04-13 00:12:40.972521 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-13 00:12:41.014004 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:41.014146 | 
orchestrator | 2026-04-13 00:12:41.014162 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-13 00:12:41.090203 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:41.090294 | orchestrator | 2026-04-13 00:12:41.090311 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-13 00:12:41.875511 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:41.875614 | orchestrator | 2026-04-13 00:12:41.875634 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-13 00:12:42.659942 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:42.660000 | orchestrator | 2026-04-13 00:12:42.660007 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-13 00:12:44.469905 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-13 00:12:44.470086 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-13 00:12:44.470118 | orchestrator | 2026-04-13 00:12:44.470133 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-13 00:12:45.960339 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:45.960386 | orchestrator | 2026-04-13 00:12:45.960396 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-13 00:12:47.739733 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:12:47.739789 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-13 00:12:47.739810 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:12:47.739818 | orchestrator | 2026-04-13 00:12:47.739826 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-13 00:12:47.801966 | orchestrator | skipping: 
[testbed-manager] 2026-04-13 00:12:47.802037 | orchestrator | 2026-04-13 00:12:47.802045 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-13 00:12:47.871473 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:47.871547 | orchestrator | 2026-04-13 00:12:47.871553 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-13 00:12:48.443580 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:48.443667 | orchestrator | 2026-04-13 00:12:48.443693 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-13 00:12:48.521770 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:48.521999 | orchestrator | 2026-04-13 00:12:48.522058 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-13 00:12:49.377826 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-13 00:12:49.377873 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:49.377882 | orchestrator | 2026-04-13 00:12:49.377890 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-13 00:12:49.421052 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:49.421140 | orchestrator | 2026-04-13 00:12:49.421157 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-13 00:12:49.456840 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:49.456925 | orchestrator | 2026-04-13 00:12:49.456944 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-13 00:12:49.487109 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:49.487192 | orchestrator | 2026-04-13 00:12:49.487208 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-13 00:12:49.559829 | 
orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:49.559881 | orchestrator | 2026-04-13 00:12:49.559887 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-13 00:12:50.314908 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:50.315139 | orchestrator | 2026-04-13 00:12:50.315158 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-13 00:12:50.315171 | orchestrator | 2026-04-13 00:12:50.315184 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:12:51.732184 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:51.732266 | orchestrator | 2026-04-13 00:12:51.732281 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-13 00:12:52.696240 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:52.696446 | orchestrator | 2026-04-13 00:12:52.696469 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:12:52.696483 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-13 00:12:52.696494 | orchestrator | 2026-04-13 00:12:53.081443 | orchestrator | ok: Runtime: 0:07:57.266374 2026-04-13 00:12:53.100832 | 2026-04-13 00:12:53.101005 | TASK [Point out that the log in on the manager is now possible] 2026-04-13 00:12:53.140682 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-13 00:12:53.152019 | 2026-04-13 00:12:53.152157 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-13 00:12:53.192750 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-04-13 00:12:53.202265 | 2026-04-13 00:12:53.202436 | TASK [Run manager part 1 + 2] 2026-04-13 00:12:54.166886 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-13 00:12:54.235331 | orchestrator | 2026-04-13 00:12:54.235385 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-13 00:12:54.235392 | orchestrator | 2026-04-13 00:12:54.235406 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:12:57.338468 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:57.339605 | orchestrator | 2026-04-13 00:12:57.339651 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-13 00:12:57.380207 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:57.380252 | orchestrator | 2026-04-13 00:12:57.380260 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-13 00:12:57.421270 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:57.421327 | orchestrator | 2026-04-13 00:12:57.421337 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-13 00:12:57.458884 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:57.458936 | orchestrator | 2026-04-13 00:12:57.458944 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-13 00:12:57.545644 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:57.545702 | orchestrator | 2026-04-13 00:12:57.545712 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-13 00:12:57.608863 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:57.608915 | orchestrator | 2026-04-13 00:12:57.608925 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-13 00:12:57.653450 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-13 00:12:57.653493 | orchestrator | 2026-04-13 00:12:57.653499 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-13 00:12:58.373081 | orchestrator | ok: [testbed-manager] 2026-04-13 00:12:58.373129 | orchestrator | 2026-04-13 00:12:58.373139 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-13 00:12:58.428393 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:12:58.428575 | orchestrator | 2026-04-13 00:12:58.428584 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-13 00:12:59.827764 | orchestrator | changed: [testbed-manager] 2026-04-13 00:12:59.827890 | orchestrator | 2026-04-13 00:12:59.827902 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-13 00:13:00.422408 | orchestrator | ok: [testbed-manager] 2026-04-13 00:13:00.422472 | orchestrator | 2026-04-13 00:13:00.422483 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-13 00:13:01.601570 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:01.601628 | orchestrator | 2026-04-13 00:13:01.601639 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-13 00:13:19.467727 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:19.467786 | orchestrator | 2026-04-13 00:13:19.467795 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-13 00:13:20.170009 | orchestrator | ok: [testbed-manager] 2026-04-13 00:13:20.170121 | orchestrator | 2026-04-13 00:13:20.170137 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-13 00:13:20.256871 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:13:20.256953 | orchestrator | 2026-04-13 00:13:20.256970 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-13 00:13:21.250176 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:21.250270 | orchestrator | 2026-04-13 00:13:21.250287 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-13 00:13:22.277524 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:22.277607 | orchestrator | 2026-04-13 00:13:22.277622 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-13 00:13:22.908271 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:22.908364 | orchestrator | 2026-04-13 00:13:22.908380 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-13 00:13:22.959253 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-13 00:13:22.959337 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-13 00:13:22.959346 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-13 00:13:22.959354 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-13 00:13:25.176323 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:25.176430 | orchestrator | 2026-04-13 00:13:25.176449 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-13 00:13:34.435364 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-13 00:13:34.435463 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-13 00:13:34.435513 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-13 00:13:34.435528 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-13 00:13:34.435548 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-13 00:13:34.435560 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-13 00:13:34.435571 | orchestrator | 2026-04-13 00:13:34.435583 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-13 00:13:35.509786 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:35.510059 | orchestrator | 2026-04-13 00:13:35.510087 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-13 00:13:38.750882 | orchestrator | changed: [testbed-manager] 2026-04-13 00:13:38.750976 | orchestrator | 2026-04-13 00:13:38.750994 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-13 00:13:38.792134 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:13:38.792234 | orchestrator | 2026-04-13 00:13:38.792265 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-13 00:15:26.279063 | orchestrator | changed: [testbed-manager] 2026-04-13 00:15:26.279171 | orchestrator | 2026-04-13 00:15:26.279189 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-13 00:15:27.563506 | orchestrator | ok: [testbed-manager] 2026-04-13 00:15:27.563569 | 
orchestrator | 2026-04-13 00:15:27.563587 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:15:27.563600 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-13 00:15:27.563612 | orchestrator | 2026-04-13 00:15:27.843189 | orchestrator | ok: Runtime: 0:02:34.146461 2026-04-13 00:15:27.861033 | 2026-04-13 00:15:27.861184 | TASK [Reboot manager] 2026-04-13 00:15:29.398895 | orchestrator | ok: Runtime: 0:00:00.987846 2026-04-13 00:15:29.416648 | 2026-04-13 00:15:29.416818 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-13 00:15:45.837446 | orchestrator | ok 2026-04-13 00:15:45.856158 | 2026-04-13 00:15:45.856351 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-13 00:16:45.905774 | orchestrator | ok 2026-04-13 00:16:45.915352 | 2026-04-13 00:16:45.915500 | TASK [Deploy manager + bootstrap nodes] 2026-04-13 00:16:48.661692 | orchestrator | 2026-04-13 00:16:48.661845 | orchestrator | # DEPLOY MANAGER 2026-04-13 00:16:48.661860 | orchestrator | 2026-04-13 00:16:48.661868 | orchestrator | + set -e 2026-04-13 00:16:48.661876 | orchestrator | + echo 2026-04-13 00:16:48.661885 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-13 00:16:48.661895 | orchestrator | + echo 2026-04-13 00:16:48.661926 | orchestrator | + cat /opt/manager-vars.sh 2026-04-13 00:16:48.665402 | orchestrator | export NUMBER_OF_NODES=6 2026-04-13 00:16:48.665475 | orchestrator | 2026-04-13 00:16:48.665489 | orchestrator | export CEPH_VERSION=reef 2026-04-13 00:16:48.665503 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-13 00:16:48.665519 | orchestrator | export MANAGER_VERSION=10.0.0 2026-04-13 00:16:48.665546 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-13 00:16:48.665555 | orchestrator | 2026-04-13 00:16:48.665569 | orchestrator | export ARA=false 2026-04-13 00:16:48.665577 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-13 00:16:48.665590 | orchestrator | export TEMPEST=true 2026-04-13 00:16:48.665597 | orchestrator | export IS_ZUUL=true 2026-04-13 00:16:48.665605 | orchestrator | 2026-04-13 00:16:48.665618 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-04-13 00:16:48.665626 | orchestrator | export EXTERNAL_API=false 2026-04-13 00:16:48.665633 | orchestrator | 2026-04-13 00:16:48.665640 | orchestrator | export IMAGE_USER=ubuntu 2026-04-13 00:16:48.665651 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:48.665659 | orchestrator | 2026-04-13 00:16:48.665666 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-13 00:16:48.665682 | orchestrator | 2026-04-13 00:16:48.665690 | orchestrator | + echo 2026-04-13 00:16:48.665698 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-13 00:16:48.667150 | orchestrator | ++ export INTERACTIVE=false 2026-04-13 00:16:48.667179 | orchestrator | ++ INTERACTIVE=false 2026-04-13 00:16:48.667190 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-13 00:16:48.667200 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-13 00:16:48.667304 | orchestrator | + source /opt/manager-vars.sh 2026-04-13 00:16:48.667315 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 00:16:48.667323 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 00:16:48.667330 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 00:16:48.667337 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 00:16:48.667345 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 00:16:48.667425 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-13 00:16:48.667435 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-13 00:16:48.667443 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-13 00:16:48.667450 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 00:16:48.667467 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 00:16:48.667475 | orchestrator | ++ 
export ARA=false 2026-04-13 00:16:48.667482 | orchestrator | ++ ARA=false 2026-04-13 00:16:48.667490 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 00:16:48.667497 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 00:16:48.667508 | orchestrator | ++ export TEMPEST=true 2026-04-13 00:16:48.667515 | orchestrator | ++ TEMPEST=true 2026-04-13 00:16:48.667523 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 00:16:48.667530 | orchestrator | ++ IS_ZUUL=true 2026-04-13 00:16:48.667540 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-04-13 00:16:48.667547 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-04-13 00:16:48.667557 | orchestrator | ++ export EXTERNAL_API=false 2026-04-13 00:16:48.667564 | orchestrator | ++ EXTERNAL_API=false 2026-04-13 00:16:48.667652 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-13 00:16:48.667664 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-13 00:16:48.667825 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:48.667843 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:48.667855 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-13 00:16:48.667867 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-13 00:16:48.667879 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-13 00:16:48.726590 | orchestrator | + docker version 2026-04-13 00:16:48.823397 | orchestrator | Client: Docker Engine - Community 2026-04-13 00:16:48.823492 | orchestrator | Version: 27.5.1 2026-04-13 00:16:48.823508 | orchestrator | API version: 1.47 2026-04-13 00:16:48.823521 | orchestrator | Go version: go1.22.11 2026-04-13 00:16:48.823532 | orchestrator | Git commit: 9f9e405 2026-04-13 00:16:48.823543 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-13 00:16:48.823565 | orchestrator | OS/Arch: linux/amd64 2026-04-13 00:16:48.823578 | orchestrator | Context: default 2026-04-13 00:16:48.823585 | orchestrator | 2026-04-13 
00:16:48.823592 | orchestrator | Server: Docker Engine - Community 2026-04-13 00:16:48.823598 | orchestrator | Engine: 2026-04-13 00:16:48.823605 | orchestrator | Version: 27.5.1 2026-04-13 00:16:48.823612 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-13 00:16:48.823644 | orchestrator | Go version: go1.22.11 2026-04-13 00:16:48.823650 | orchestrator | Git commit: 4c9b3b0 2026-04-13 00:16:48.823657 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-13 00:16:48.823663 | orchestrator | OS/Arch: linux/amd64 2026-04-13 00:16:48.823677 | orchestrator | Experimental: false 2026-04-13 00:16:48.823684 | orchestrator | containerd: 2026-04-13 00:16:48.823836 | orchestrator | Version: v2.2.2 2026-04-13 00:16:48.823848 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-13 00:16:48.823855 | orchestrator | runc: 2026-04-13 00:16:48.823862 | orchestrator | Version: 1.3.4 2026-04-13 00:16:48.823868 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-13 00:16:48.823874 | orchestrator | docker-init: 2026-04-13 00:16:48.823881 | orchestrator | Version: 0.19.0 2026-04-13 00:16:48.823888 | orchestrator | GitCommit: de40ad0 2026-04-13 00:16:48.826391 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-13 00:16:48.834488 | orchestrator | + set -e 2026-04-13 00:16:48.834532 | orchestrator | + source /opt/manager-vars.sh 2026-04-13 00:16:48.834540 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-13 00:16:48.834584 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-13 00:16:48.834592 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-13 00:16:48.834599 | orchestrator | ++ CEPH_VERSION=reef 2026-04-13 00:16:48.834607 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-13 00:16:48.834615 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-13 00:16:48.834622 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-13 00:16:48.834630 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-13 
00:16:48.834637 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-13 00:16:48.834645 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-13 00:16:48.834652 | orchestrator | ++ export ARA=false 2026-04-13 00:16:48.834660 | orchestrator | ++ ARA=false 2026-04-13 00:16:48.834667 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-13 00:16:48.834674 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-13 00:16:48.834681 | orchestrator | ++ export TEMPEST=true 2026-04-13 00:16:48.834688 | orchestrator | ++ TEMPEST=true 2026-04-13 00:16:48.834696 | orchestrator | ++ export IS_ZUUL=true 2026-04-13 00:16:48.834703 | orchestrator | ++ IS_ZUUL=true 2026-04-13 00:16:48.834710 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-04-13 00:16:48.834717 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-04-13 00:16:48.834724 | orchestrator | ++ export EXTERNAL_API=false 2026-04-13 00:16:48.834731 | orchestrator | ++ EXTERNAL_API=false 2026-04-13 00:16:48.834738 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-13 00:16:48.834745 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-13 00:16:48.834753 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:48.834760 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-13 00:16:48.834767 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-13 00:16:48.834774 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-13 00:16:48.834781 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-13 00:16:48.834788 | orchestrator | ++ export INTERACTIVE=false 2026-04-13 00:16:48.834795 | orchestrator | ++ INTERACTIVE=false 2026-04-13 00:16:48.834803 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-13 00:16:48.834813 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-13 00:16:48.834828 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-13 00:16:48.834835 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0 2026-04-13 00:16:48.839156 | orchestrator 
| + set -e
2026-04-13 00:16:48.839223 | orchestrator | + VERSION=10.0.0
2026-04-13 00:16:48.839245 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-13 00:16:48.848551 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-13 00:16:48.848613 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-13 00:16:48.853296 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-13 00:16:48.855665 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-13 00:16:48.861494 | orchestrator | /opt/configuration ~
2026-04-13 00:16:48.861560 | orchestrator | + set -e
2026-04-13 00:16:48.861574 | orchestrator | + pushd /opt/configuration
2026-04-13 00:16:48.861586 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-13 00:16:48.865001 | orchestrator | + source /opt/venv/bin/activate
2026-04-13 00:16:48.866702 | orchestrator | ++ deactivate nondestructive
2026-04-13 00:16:48.866731 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:48.866747 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:48.866787 | orchestrator | ++ hash -r
2026-04-13 00:16:48.866799 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:48.866810 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-13 00:16:48.866821 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-13 00:16:48.866832 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-13 00:16:48.866844 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-13 00:16:48.866855 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-13 00:16:48.866866 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-13 00:16:48.866876 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-13 00:16:48.866888 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-13 00:16:48.866900 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-13 00:16:48.866911 | orchestrator | ++ export PATH
2026-04-13 00:16:48.866922 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:48.866933 | orchestrator | ++ '[' -z '' ']'
2026-04-13 00:16:48.866944 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-13 00:16:48.866955 | orchestrator | ++ PS1='(venv) '
2026-04-13 00:16:48.866965 | orchestrator | ++ export PS1
2026-04-13 00:16:48.866976 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-13 00:16:48.866987 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-13 00:16:48.866998 | orchestrator | ++ hash -r
2026-04-13 00:16:48.867009 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-13 00:16:50.107993 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-13 00:16:50.109494 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-13 00:16:50.111429 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-13 00:16:50.113278 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-13 00:16:50.114515 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-13 00:16:50.125793 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-13 00:16:50.127307 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-13 00:16:50.128043 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-13 00:16:50.129287 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-13 00:16:50.163254 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-13 00:16:50.164512 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-13 00:16:50.166341 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-13 00:16:50.167456 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-13 00:16:50.171527 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-13 00:16:50.397971 | orchestrator | ++ which gilt
2026-04-13 00:16:50.401219 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-13 00:16:50.401309 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-13 00:16:50.692668 | orchestrator | osism.cfg-generics:
2026-04-13 00:16:50.853893 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-13 00:16:50.854097 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-13 00:16:50.854148 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-13 00:16:50.854165 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-13 00:16:51.661760 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-13 00:16:51.670384 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-13 00:16:52.039184 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-13 00:16:52.087594 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-13 00:16:52.087727 | orchestrator | + deactivate
2026-04-13 00:16:52.087753 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-13 00:16:52.087775 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-13 00:16:52.087794 | orchestrator | + export PATH
2026-04-13 00:16:52.088206 | orchestrator | ~
2026-04-13 00:16:52.088238 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-13 00:16:52.088256 | orchestrator | + '[' -n '' ']'
2026-04-13 00:16:52.088277 | orchestrator | + hash -r
2026-04-13 00:16:52.088295 | orchestrator | + '[' -n '' ']'
2026-04-13 00:16:52.088311 | orchestrator | + unset VIRTUAL_ENV
2026-04-13 00:16:52.088329 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-13 00:16:52.088346 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-13 00:16:52.088363 | orchestrator | + unset -f deactivate
2026-04-13 00:16:52.088381 | orchestrator | + popd
2026-04-13 00:16:52.089733 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-13 00:16:52.089827 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-13 00:16:52.091468 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-13 00:16:52.154681 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-13 00:16:52.154788 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-13 00:16:52.155569 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-13 00:16:52.240509 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-13 00:16:52.240617 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-13 00:16:52.247476 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-13 00:16:52.253305 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-13 00:16:52.346830 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-13 00:16:52.346938 | orchestrator | + source /opt/venv/bin/activate
2026-04-13 00:16:52.346953 | orchestrator | ++ deactivate nondestructive
2026-04-13 00:16:52.346965 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:52.346976 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:52.346987 | orchestrator | ++ hash -r
2026-04-13 00:16:52.346998 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:52.347009 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-13 00:16:52.347019 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-13 00:16:52.347043 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-13 00:16:52.347056 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-13 00:16:52.347067 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-13 00:16:52.347078 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-13 00:16:52.347174 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-13 00:16:52.347204 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-13 00:16:52.347711 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-13 00:16:52.347747 | orchestrator | ++ export PATH
2026-04-13 00:16:52.347768 | orchestrator | ++ '[' -n '' ']'
2026-04-13 00:16:52.347787 | orchestrator | ++ '[' -z '' ']'
2026-04-13 00:16:52.347806 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-13 00:16:52.347827 | orchestrator | ++ PS1='(venv) '
2026-04-13 00:16:52.347846 | orchestrator | ++ export PS1
2026-04-13 00:16:52.347864 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-13 00:16:52.347883 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-13 00:16:52.347902 | orchestrator | ++ hash -r
2026-04-13 00:16:52.347921 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-13 00:16:53.521719 | orchestrator |
2026-04-13 00:16:53.521831 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-13 00:16:53.521847 | orchestrator |
2026-04-13 00:16:53.521860 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-13 00:16:54.093297 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:54.093410 | orchestrator |
2026-04-13 00:16:54.093426 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-13 00:16:55.079560 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:55.079641 | orchestrator |
2026-04-13 00:16:55.079655 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-13 00:16:55.079667 | orchestrator |
2026-04-13 00:16:55.079677 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:16:57.428417 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:57.428512 | orchestrator |
2026-04-13 00:16:57.428529 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-13 00:16:57.473992 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:57.474178 | orchestrator |
2026-04-13 00:16:57.474198 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-13 00:16:57.986839 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:57.986938 | orchestrator |
2026-04-13 00:16:57.986955 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-13 00:16:58.037708 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:16:58.037798 | orchestrator |
2026-04-13 00:16:58.037812 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-13 00:16:58.378247 | orchestrator | changed: [testbed-manager]
2026-04-13 00:16:58.378348 | orchestrator |
2026-04-13 00:16:58.378363 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-13 00:16:58.700698 | orchestrator | ok: [testbed-manager]
2026-04-13 00:16:58.700791 | orchestrator |
2026-04-13 00:16:58.700805 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-13 00:16:58.832687 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:16:58.832812 | orchestrator |
2026-04-13 00:16:58.832837 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-13 00:16:58.832859 | orchestrator |
2026-04-13 00:16:58.832877 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:17:00.623631 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:00.623749 | orchestrator |
2026-04-13 00:17:00.623774 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-13 00:17:00.742674 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-13 00:17:00.742769 | orchestrator |
2026-04-13 00:17:00.742785 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-13 00:17:00.800290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-13 00:17:00.800381 | orchestrator |
2026-04-13 00:17:00.800394 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-13 00:17:01.899230 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-13 00:17:01.899354 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-13 00:17:01.899381 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-13 00:17:01.899401 | orchestrator |
2026-04-13 00:17:01.899418 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-13 00:17:03.707638 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-13 00:17:03.707742 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-13 00:17:03.707760 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-13 00:17:03.707774 | orchestrator |
2026-04-13 00:17:03.707786 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-13 00:17:04.368225 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:17:04.368314 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:04.368326 | orchestrator |
2026-04-13 00:17:04.368335 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-13 00:17:05.036321 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:17:05.036405 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:05.036417 | orchestrator |
2026-04-13 00:17:05.036426 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-13 00:17:05.095942 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:05.096047 | orchestrator |
2026-04-13 00:17:05.096074 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-13 00:17:05.469405 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:05.469500 | orchestrator |
2026-04-13 00:17:05.469519 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-13 00:17:05.536132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-13 00:17:05.536211 | orchestrator |
2026-04-13 00:17:05.536221 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-13 00:17:06.681561 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:06.681683 | orchestrator |
2026-04-13 00:17:06.681711 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-13 00:17:07.608290 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:07.608412 | orchestrator |
2026-04-13 00:17:07.608465 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-13 00:17:21.325472 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:21.325559 | orchestrator |
2026-04-13 00:17:21.325572 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-13 00:17:21.375839 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:21.375955 | orchestrator |
2026-04-13 00:17:21.375978 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-13 00:17:21.375998 | orchestrator |
2026-04-13 00:17:21.376016 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:17:23.313501 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:23.313609 | orchestrator |
2026-04-13 00:17:23.313625 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-13 00:17:23.439460 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-13 00:17:23.439526 | orchestrator |
2026-04-13 00:17:23.439533 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-13 00:17:23.515016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-13 00:17:23.515162 | orchestrator |
2026-04-13 00:17:23.515180 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-13 00:17:26.165825 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:26.165939 | orchestrator |
2026-04-13 00:17:26.165955 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-13 00:17:26.219462 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:26.219546 | orchestrator |
2026-04-13 00:17:26.219559 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-13 00:17:26.345807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-13 00:17:26.345892 | orchestrator |
2026-04-13 00:17:26.345904 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-13 00:17:29.320646 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-13 00:17:29.320767 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-13 00:17:29.320783 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-13 00:17:29.320796 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-13 00:17:29.320808 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-13 00:17:29.320819 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-13 00:17:29.320832 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-13 00:17:29.320844 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-13 00:17:29.320854 | orchestrator |
2026-04-13 00:17:29.320867 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-13 00:17:30.007656 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:30.007723 | orchestrator |
2026-04-13 00:17:30.007729 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-13 00:17:30.674833 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:30.674954 | orchestrator |
2026-04-13 00:17:30.674982 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-13 00:17:30.760791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-13 00:17:30.760911 | orchestrator |
2026-04-13 00:17:30.760927 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-13 00:17:31.989239 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-13 00:17:31.989332 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-13 00:17:31.989342 | orchestrator |
2026-04-13 00:17:31.989352 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-13 00:17:32.647977 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:32.648048 | orchestrator |
2026-04-13 00:17:32.648057 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-13 00:17:32.701569 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:32.701658 | orchestrator |
2026-04-13 00:17:32.701670 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-13 00:17:32.785753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-13 00:17:32.785850 | orchestrator |
2026-04-13 00:17:32.785865 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-13 00:17:33.429165 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:33.429263 | orchestrator |
2026-04-13 00:17:33.429278 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-13 00:17:33.497103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-13 00:17:33.497206 | orchestrator |
2026-04-13 00:17:33.497221 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-13 00:17:34.929792 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:17:34.929938 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-13 00:17:34.929948 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:34.929955 | orchestrator |
2026-04-13 00:17:34.929961 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-13 00:17:35.601369 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:35.601496 | orchestrator |
2026-04-13 00:17:35.601527 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-13 00:17:35.656475 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:35.656594 | orchestrator |
2026-04-13 00:17:35.656620 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-13 00:17:35.750117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-13 00:17:35.750200 | orchestrator |
2026-04-13 00:17:35.750210 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-13 00:17:36.334478 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:36.334581 | orchestrator |
2026-04-13 00:17:36.334597 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-13 00:17:36.754576 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:36.754659 | orchestrator |
2026-04-13 00:17:36.754669 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-13 00:17:38.104109 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-13 00:17:38.104184 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-13 00:17:38.104190 | orchestrator |
2026-04-13 00:17:38.104195 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-13 00:17:38.780779 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:38.780890 | orchestrator |
2026-04-13 00:17:38.780915 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-13 00:17:39.171232 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:39.171361 | orchestrator |
2026-04-13 00:17:39.171379 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-13 00:17:39.552108 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:39.552206 | orchestrator |
2026-04-13 00:17:39.552219 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-13 00:17:39.609946 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:39.610198 | orchestrator |
2026-04-13 00:17:39.610249 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-13 00:17:39.676638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-13 00:17:39.676735 | orchestrator |
2026-04-13 00:17:39.676751 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-13 00:17:39.734494 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:39.734604 | orchestrator |
2026-04-13 00:17:39.734627 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-13 00:17:41.801393 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-13 00:17:41.801488 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-13 00:17:41.801499 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-13 00:17:41.801504 | orchestrator |
2026-04-13 00:17:41.801509 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-13 00:17:42.527686 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:42.527791 | orchestrator |
2026-04-13 00:17:42.527808 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-13 00:17:43.275617 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:43.275764 | orchestrator |
2026-04-13 00:17:43.275780 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-13 00:17:44.005149 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:44.005237 | orchestrator |
2026-04-13 00:17:44.005246 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-13 00:17:44.080659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-13 00:17:44.080750 | orchestrator |
2026-04-13 00:17:44.080765 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-13 00:17:44.138137 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:44.138221 | orchestrator |
2026-04-13 00:17:44.138230 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-13 00:17:44.888000 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-13 00:17:44.888160 | orchestrator |
2026-04-13 00:17:44.888176 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-13 00:17:44.969024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-13 00:17:44.969141 | orchestrator |
2026-04-13 00:17:44.969151 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-13 00:17:45.672482 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:45.672581 | orchestrator |
2026-04-13 00:17:45.672598 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-13 00:17:46.332426 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:46.332519 | orchestrator |
2026-04-13 00:17:46.332536 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-13 00:17:46.392095 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:17:46.392169 | orchestrator |
2026-04-13 00:17:46.392176 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-13 00:17:46.456252 | orchestrator | ok: [testbed-manager]
2026-04-13 00:17:46.456335 | orchestrator |
2026-04-13 00:17:46.456344 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-13 00:17:47.314422 | orchestrator | changed: [testbed-manager]
2026-04-13 00:17:47.314579 | orchestrator |
2026-04-13 00:17:47.314609 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-13 00:19:05.313114 | orchestrator | changed: [testbed-manager]
2026-04-13 00:19:05.313231 | orchestrator |
2026-04-13 00:19:05.313281 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-13 00:19:06.402577 | orchestrator | ok: [testbed-manager]
2026-04-13 00:19:06.402680 | orchestrator |
2026-04-13 00:19:06.402696 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-13 00:19:06.463054 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:19:06.463151 | orchestrator |
2026-04-13 00:19:06.463167 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-13 00:19:09.052684 | orchestrator | changed: [testbed-manager]
2026-04-13 00:19:09.052768 | orchestrator |
2026-04-13 00:19:09.052781 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-13 00:19:09.169808 | orchestrator | ok: [testbed-manager]
2026-04-13 00:19:09.169907 | orchestrator |
2026-04-13 00:19:09.169923 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-13 00:19:09.169935 | orchestrator |
2026-04-13 00:19:09.169947 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-13 00:19:09.235914 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:19:09.236048 | orchestrator |
2026-04-13 00:19:09.236065 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-13 00:20:09.288331 | orchestrator | Pausing for 60 seconds
2026-04-13 00:20:09.288478 | orchestrator | changed: [testbed-manager]
2026-04-13 00:20:09.288499 | orchestrator |
2026-04-13 00:20:09.288511 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-13 00:20:12.558002 | orchestrator | changed: [testbed-manager]
2026-04-13 00:20:12.558143 | orchestrator |
2026-04-13 00:20:12.558178 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-13 00:21:14.666620 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-13 00:21:14.666695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-13 00:21:14.666701 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-04-13 00:21:14.666706 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:14.666711 | orchestrator |
2026-04-13 00:21:14.666716 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-13 00:21:20.384895 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:20.385010 | orchestrator |
2026-04-13 00:21:20.385028 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-13 00:21:20.471830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-13 00:21:20.471922 | orchestrator |
2026-04-13 00:21:20.471936 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-13 00:21:20.471949 | orchestrator |
2026-04-13 00:21:20.471961 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-13 00:21:20.522624 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:21:20.522719 | orchestrator |
2026-04-13 00:21:20.522739 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-13 00:21:20.602444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-13 00:21:20.602536 | orchestrator |
2026-04-13 00:21:20.602568 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-13 00:21:21.378354 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:21.378448 | orchestrator |
2026-04-13 00:21:21.378464 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-13 00:21:24.855007 | orchestrator | ok: [testbed-manager]
2026-04-13 00:21:24.855102 | orchestrator |
2026-04-13 00:21:24.855118 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-13 00:21:24.934763 | orchestrator | ok: [testbed-manager] => {
2026-04-13 00:21:24.934919 | orchestrator | "version_check_result.stdout_lines": [
2026-04-13 00:21:24.934936 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-13 00:21:24.934948 | orchestrator | "Checking running containers against expected versions...",
2026-04-13 00:21:24.934960 | orchestrator | "",
2026-04-13 00:21:24.934972 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-13 00:21:24.934984 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-13 00:21:24.934996 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935007 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-13 00:21:24.935018 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935056 | orchestrator | "",
2026-04-13 00:21:24.935069 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-13 00:21:24.935080 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-13 00:21:24.935091 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935101 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-13 00:21:24.935112 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935123 | orchestrator | "",
2026-04-13 00:21:24.935134 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-13 00:21:24.935145 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-13 00:21:24.935155 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935166 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-13 00:21:24.935176 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935187 | orchestrator | "",
2026-04-13 00:21:24.935200 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-13 00:21:24.935211 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-13 00:21:24.935222 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935232 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-13 00:21:24.935243 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935254 | orchestrator | "",
2026-04-13 00:21:24.935265 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-13 00:21:24.935275 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-13 00:21:24.935286 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935297 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-13 00:21:24.935310 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935322 | orchestrator | "",
2026-04-13 00:21:24.935335 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-13 00:21:24.935348 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-13 00:21:24.935360 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935372 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-13 00:21:24.935384 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935396 | orchestrator | "",
2026-04-13 00:21:24.935408 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-13 00:21:24.935421 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-13 00:21:24.935434 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935446 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-13 00:21:24.935459 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935471 | orchestrator | "",
2026-04-13 00:21:24.935483 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-13 00:21:24.935495 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-13 00:21:24.935507 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935519 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-13 00:21:24.935532 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935544 | orchestrator | "",
2026-04-13 00:21:24.935557 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-13 00:21:24.935569 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-13 00:21:24.935581 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935593 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-13 00:21:24.935606 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935618 | orchestrator | "",
2026-04-13 00:21:24.935631 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-13 00:21:24.935643 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-13 00:21:24.935656 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935674 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-13 00:21:24.935692 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935714 | orchestrator | "",
2026-04-13 00:21:24.935731 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-13 00:21:24.935749 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-13 00:21:24.935768 | orchestrator | " Enabled: true",
2026-04-13 00:21:24.935812 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-13 00:21:24.935832 | orchestrator | " Status: ✅ MATCH",
2026-04-13 00:21:24.935845 | orchestrator | "",
2026-04-13 00:21:24.935856 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-13 00:21:24.935867 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.935877 | orchestrator | " Enabled: true", 2026-04-13 00:21:24.935887 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.935898 | orchestrator | " Status: ✅ MATCH", 2026-04-13 00:21:24.935909 | orchestrator | "", 2026-04-13 00:21:24.935920 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-13 00:21:24.935931 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.935942 | orchestrator | " Enabled: true", 2026-04-13 00:21:24.935952 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.935963 | orchestrator | " Status: ✅ MATCH", 2026-04-13 00:21:24.935973 | orchestrator | "", 2026-04-13 00:21:24.935984 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-13 00:21:24.936003 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.936014 | orchestrator | " Enabled: true", 2026-04-13 00:21:24.936025 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.936057 | orchestrator | " Status: ✅ MATCH", 2026-04-13 00:21:24.936068 | orchestrator | "", 2026-04-13 00:21:24.936079 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-13 00:21:24.936090 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.936101 | orchestrator | " Enabled: true", 2026-04-13 00:21:24.936111 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-13 00:21:24.936122 | orchestrator | " Status: ✅ MATCH", 2026-04-13 00:21:24.936133 | orchestrator | "", 2026-04-13 00:21:24.936144 | orchestrator | "=== Summary ===", 2026-04-13 00:21:24.936154 | orchestrator | "Errors (version mismatches): 0", 2026-04-13 00:21:24.936165 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-13 00:21:24.936176 | orchestrator | "", 2026-04-13 00:21:24.936187 | orchestrator | "✅ All running containers match expected versions!" 2026-04-13 00:21:24.936198 | orchestrator | ] 2026-04-13 00:21:24.936209 | orchestrator | } 2026-04-13 00:21:24.936220 | orchestrator | 2026-04-13 00:21:24.936232 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-13 00:21:24.994006 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:21:24.994170 | orchestrator | 2026-04-13 00:21:24.994188 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:21:24.994202 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-13 00:21:24.994213 | orchestrator | 2026-04-13 00:21:25.113753 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-13 00:21:25.113943 | orchestrator | + deactivate 2026-04-13 00:21:25.113967 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-13 00:21:25.113981 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-13 00:21:25.113992 | orchestrator | + export PATH 2026-04-13 00:21:25.114003 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-13 00:21:25.114058 | orchestrator | + '[' -n '' ']' 2026-04-13 00:21:25.114073 | orchestrator | + hash -r 2026-04-13 00:21:25.114084 | orchestrator | + '[' -n '' ']' 2026-04-13 00:21:25.114095 | orchestrator | + unset VIRTUAL_ENV 2026-04-13 00:21:25.114109 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-13 00:21:25.114120 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-13 00:21:25.114226 | orchestrator | + unset -f deactivate 2026-04-13 00:21:25.114241 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-13 00:21:25.122529 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-13 00:21:25.122596 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-13 00:21:25.122608 | orchestrator | + local max_attempts=60 2026-04-13 00:21:25.122619 | orchestrator | + local name=ceph-ansible 2026-04-13 00:21:25.122629 | orchestrator | + local attempt_num=1 2026-04-13 00:21:25.123044 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:21:25.156222 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:21:25.156309 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-13 00:21:25.156324 | orchestrator | + local max_attempts=60 2026-04-13 00:21:25.156336 | orchestrator | + local name=kolla-ansible 2026-04-13 00:21:25.156348 | orchestrator | + local attempt_num=1 2026-04-13 00:21:25.157219 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-13 00:21:25.198349 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:21:25.198419 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-13 00:21:25.198427 | orchestrator | + local max_attempts=60 2026-04-13 00:21:25.198433 | orchestrator | + local name=osism-ansible 2026-04-13 00:21:25.198439 | orchestrator | + local attempt_num=1 2026-04-13 00:21:25.198978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-13 00:21:25.236537 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:21:25.236611 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-13 00:21:25.236620 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-13 00:21:25.959233 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-13 00:21:26.139868 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-13 00:21:26.139970 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:26.139989 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:26.140001 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-13 00:21:26.140035 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-04-13 00:21:26.140047 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:26.140058 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:26.140069 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:26.140080 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:26.140091 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-13 00:21:26.140102 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes 
(healthy) 2026-04-13 00:21:26.140113 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-13 00:21:26.140145 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:26.140157 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-13 00:21:26.140168 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-13 00:21:26.140180 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-13 00:21:26.145718 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-13 00:21:26.207227 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-13 00:21:26.207452 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-13 00:21:26.211819 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-13 00:21:38.806146 | orchestrator | 2026-04-13 00:21:38 | INFO  | Prepare task for execution of resolvconf. 2026-04-13 00:21:39.047947 | orchestrator | 2026-04-13 00:21:39 | INFO  | Task 7cf0de06-7963-41d3-9079-bb033937bb85 (resolvconf) was prepared for execution. 2026-04-13 00:21:39.048030 | orchestrator | 2026-04-13 00:21:39 | INFO  | It takes a moment until task 7cf0de06-7963-41d3-9079-bb033937bb85 (resolvconf) has been started and output is visible here. 
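The trace above steps through a `wait_for_container_healthy` helper that reads `docker inspect -f '{{.State.Health.Status}}'` and proceeds once the container reports `healthy`. A rough reconstruction sketch; only the variable names (`max_attempts`, `name`, `attempt_num`) come from the trace, the retry/sleep details are assumptions, and `health_of` is a stand-in for the Docker probe so the sketch runs without a Docker daemon:

```shell
# health_of stands in for:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
HEALTH_DIR=$(mktemp -d)
health_of() { cat "$HEALTH_DIR/$1" 2>/dev/null || echo starting; }

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    while [ "$(health_of "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1   # assumed delay between polls
    done
}

echo healthy > "$HEALTH_DIR/ceph-ansible"
wait_for_container_healthy 60 ceph-ansible   # first probe already healthy, as in the log
```

In the log all three containers answer `healthy` on the first probe, so the loop body never runs and the `[[ healthy == \h\e\a\l\t\h\y ]]` test succeeds immediately.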
2026-04-13 00:21:53.271882 | orchestrator |
2026-04-13 00:21:53.271987 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-04-13 00:21:53.272002 | orchestrator |
2026-04-13 00:21:53.272013 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-13 00:21:53.272024 | orchestrator | Monday 13 April 2026 00:21:42 +0000 (0:00:00.180) 0:00:00.180 **********
2026-04-13 00:21:53.272034 | orchestrator | ok: [testbed-manager]
2026-04-13 00:21:53.272045 | orchestrator |
2026-04-13 00:21:53.272055 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-13 00:21:53.272066 | orchestrator | Monday 13 April 2026 00:21:46 +0000 (0:00:04.309) 0:00:04.489 **********
2026-04-13 00:21:53.272076 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:21:53.272087 | orchestrator |
2026-04-13 00:21:53.272097 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-13 00:21:53.272107 | orchestrator | Monday 13 April 2026 00:21:46 +0000 (0:00:00.075) 0:00:04.565 **********
2026-04-13 00:21:53.272135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-04-13 00:21:53.272147 | orchestrator |
2026-04-13 00:21:53.272157 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-13 00:21:53.272167 | orchestrator | Monday 13 April 2026 00:21:46 +0000 (0:00:00.083) 0:00:04.649 **********
2026-04-13 00:21:53.272177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-04-13 00:21:53.272187 | orchestrator |
2026-04-13 00:21:53.272197 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-13 00:21:53.272207 | orchestrator | Monday 13 April 2026 00:21:46 +0000 (0:00:00.094) 0:00:04.743 **********
2026-04-13 00:21:53.272216 | orchestrator | ok: [testbed-manager]
2026-04-13 00:21:53.272226 | orchestrator |
2026-04-13 00:21:53.272236 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-13 00:21:53.272245 | orchestrator | Monday 13 April 2026 00:21:48 +0000 (0:00:01.243) 0:00:05.986 **********
2026-04-13 00:21:53.272277 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:21:53.272287 | orchestrator |
2026-04-13 00:21:53.272297 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-13 00:21:53.272307 | orchestrator | Monday 13 April 2026 00:21:48 +0000 (0:00:00.058) 0:00:06.044 **********
2026-04-13 00:21:53.272316 | orchestrator | ok: [testbed-manager]
2026-04-13 00:21:53.272325 | orchestrator |
2026-04-13 00:21:53.272335 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-13 00:21:53.272345 | orchestrator | Monday 13 April 2026 00:21:48 +0000 (0:00:00.599) 0:00:06.644 **********
2026-04-13 00:21:53.272354 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:21:53.272364 | orchestrator |
2026-04-13 00:21:53.272374 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-13 00:21:53.272388 | orchestrator | Monday 13 April 2026 00:21:48 +0000 (0:00:00.072) 0:00:06.717 **********
2026-04-13 00:21:53.272399 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:53.272410 | orchestrator |
2026-04-13 00:21:53.272422 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-13 00:21:53.272433 | orchestrator | Monday 13 April 2026 00:21:49 +0000 (0:00:00.615) 0:00:07.333 **********
2026-04-13 00:21:53.272444 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:53.272455 | orchestrator |
2026-04-13 00:21:53.272466 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-13 00:21:53.272477 | orchestrator | Monday 13 April 2026 00:21:50 +0000 (0:00:01.190) 0:00:08.524 **********
2026-04-13 00:21:53.272488 | orchestrator | ok: [testbed-manager]
2026-04-13 00:21:53.272499 | orchestrator |
2026-04-13 00:21:53.272509 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-13 00:21:53.272521 | orchestrator | Monday 13 April 2026 00:21:51 +0000 (0:00:01.057) 0:00:09.582 **********
2026-04-13 00:21:53.272533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-04-13 00:21:53.272544 | orchestrator |
2026-04-13 00:21:53.272555 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-13 00:21:53.272566 | orchestrator | Monday 13 April 2026 00:21:51 +0000 (0:00:00.085) 0:00:09.667 **********
2026-04-13 00:21:53.272577 | orchestrator | changed: [testbed-manager]
2026-04-13 00:21:53.272588 | orchestrator |
2026-04-13 00:21:53.272600 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:21:53.272610 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-13 00:21:53.272620 | orchestrator |
2026-04-13 00:21:53.272629 | orchestrator |
2026-04-13 00:21:53.272639 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:21:53.272648 | orchestrator | Monday 13 April 2026 00:21:53 +0000 (0:00:01.222) 0:00:10.889 **********
2026-04-13 00:21:53.272658 | orchestrator | ===============================================================================
2026-04-13 00:21:53.272667 | orchestrator | Gathering Facts --------------------------------------------------------- 4.31s
2026-04-13 00:21:53.272677 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.24s
2026-04-13 00:21:53.272686 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s
2026-04-13 00:21:53.272696 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.19s
2026-04-13 00:21:53.272705 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.06s
2026-04-13 00:21:53.272715 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s
2026-04-13 00:21:53.272767 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.60s
2026-04-13 00:21:53.272778 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-04-13 00:21:53.272788 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-04-13 00:21:53.272805 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-04-13 00:21:53.272815 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2026-04-13 00:21:53.272824 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2026-04-13 00:21:53.272834 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-04-13 00:21:53.476172 | orchestrator | + osism apply sshconfig
2026-04-13 00:22:04.805539 | orchestrator | 2026-04-13 00:22:04 | INFO  | Prepare task for execution of sshconfig.
2026-04-13 00:22:04.885020 | orchestrator | 2026-04-13 00:22:04 | INFO  | Task 2dd42ba5-a899-4537-a971-91dba3a53b08 (sshconfig) was prepared for execution.
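The sshconfig play that follows drops one config fragment per host into `~/.ssh/config.d` ("Ensure config for each host exist") and then concatenates them into a single ssh config ("Assemble ssh config"). A minimal sketch of that drop-in/assemble pattern in plain shell; the temp paths, host list, and `User` value here are illustrative, not taken from the role:

```shell
# Illustrative only: build one ssh config from per-host fragments,
# the way an assemble-style task stitches config.d into a single file.
workdir=$(mktemp -d)
confdir="$workdir/config.d"
mkdir -p "$confdir"

# One fragment per host
for host in testbed-node-0 testbed-node-1; do
    cat > "$confdir/$host" <<EOF
Host $host
    User dragon
EOF
done

# Assemble: concatenate fragments in filename order into the final config
cat "$confdir"/* > "$workdir/config"
```

Keeping one fragment per host makes the per-host task idempotent and lets a later run rewrite a single host's stanza without touching the others.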
2026-04-13 00:22:04.885113 | orchestrator | 2026-04-13 00:22:04 | INFO  | It takes a moment until task 2dd42ba5-a899-4537-a971-91dba3a53b08 (sshconfig) has been started and output is visible here.
2026-04-13 00:22:16.447354 | orchestrator |
2026-04-13 00:22:16.447462 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-04-13 00:22:16.447479 | orchestrator |
2026-04-13 00:22:16.447491 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-04-13 00:22:16.447504 | orchestrator | Monday 13 April 2026 00:22:08 +0000 (0:00:00.195) 0:00:00.195 **********
2026-04-13 00:22:16.447516 | orchestrator | ok: [testbed-manager]
2026-04-13 00:22:16.447529 | orchestrator |
2026-04-13 00:22:16.447541 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-04-13 00:22:16.447553 | orchestrator | Monday 13 April 2026 00:22:09 +0000 (0:00:00.962) 0:00:01.158 **********
2026-04-13 00:22:16.447564 | orchestrator | changed: [testbed-manager]
2026-04-13 00:22:16.447577 | orchestrator |
2026-04-13 00:22:16.447588 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-04-13 00:22:16.447600 | orchestrator | Monday 13 April 2026 00:22:09 +0000 (0:00:00.568) 0:00:01.726 **********
2026-04-13 00:22:16.447611 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-04-13 00:22:16.447623 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-04-13 00:22:16.447634 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-04-13 00:22:16.447646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-04-13 00:22:16.447657 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-04-13 00:22:16.447669 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-04-13 00:22:16.447680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-04-13 00:22:16.447691 | orchestrator |
2026-04-13 00:22:16.447750 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-04-13 00:22:16.447761 | orchestrator | Monday 13 April 2026 00:22:15 +0000 (0:00:05.842) 0:00:07.568 **********
2026-04-13 00:22:16.447772 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:22:16.447783 | orchestrator |
2026-04-13 00:22:16.447794 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-04-13 00:22:16.447805 | orchestrator | Monday 13 April 2026 00:22:15 +0000 (0:00:00.121) 0:00:07.690 **********
2026-04-13 00:22:16.447816 | orchestrator | changed: [testbed-manager]
2026-04-13 00:22:16.447827 | orchestrator |
2026-04-13 00:22:16.447837 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:22:16.447850 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:22:16.447862 | orchestrator |
2026-04-13 00:22:16.447873 | orchestrator |
2026-04-13 00:22:16.447884 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:22:16.447895 | orchestrator | Monday 13 April 2026 00:22:16 +0000 (0:00:00.598) 0:00:08.289 **********
2026-04-13 00:22:16.447909 | orchestrator | ===============================================================================
2026-04-13 00:22:16.447949 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.84s
2026-04-13 00:22:16.447963 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.96s
2026-04-13 00:22:16.447976 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2026-04-13 00:22:16.447989 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.57s
2026-04-13 00:22:16.448002 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.12s
2026-04-13 00:22:16.635557 | orchestrator | + osism apply known-hosts
2026-04-13 00:22:27.985886 | orchestrator | 2026-04-13 00:22:27 | INFO  | Prepare task for execution of known-hosts.
2026-04-13 00:22:28.061398 | orchestrator | 2026-04-13 00:22:28 | INFO  | Task 5f8154a4-d3c3-451e-9b48-6ce5251e3e65 (known-hosts) was prepared for execution.
2026-04-13 00:22:28.061490 | orchestrator | 2026-04-13 00:22:28 | INFO  | It takes a moment until task 5f8154a4-d3c3-451e-9b48-6ce5251e3e65 (known-hosts) has been started and output is visible here.
2026-04-13 00:22:44.410068 | orchestrator |
2026-04-13 00:22:44.410172 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-04-13 00:22:44.410188 | orchestrator |
2026-04-13 00:22:44.410200 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-04-13 00:22:44.410212 | orchestrator | Monday 13 April 2026 00:22:31 +0000 (0:00:00.200) 0:00:00.200 **********
2026-04-13 00:22:44.410224 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-13 00:22:44.410235 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-13 00:22:44.410246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-13 00:22:44.410256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-13 00:22:44.410266 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-13 00:22:44.410276 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-13 00:22:44.410306 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-13 00:22:44.410317 | orchestrator |
2026-04-13 00:22:44.410328 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-04-13 00:22:44.410340 | orchestrator | Monday 13 April 2026 00:22:38 +0000 (0:00:06.632) 0:00:06.832 **********
2026-04-13 00:22:44.410352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-13 00:22:44.410365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-13 00:22:44.410375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-13 00:22:44.410394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-13 00:22:44.410405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-13 00:22:44.410416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-13 00:22:44.410427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-13 00:22:44.410437 | orchestrator |
2026-04-13 00:22:44.410447 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:44.410457 | orchestrator | Monday 13 April 2026 00:22:38 +0000 (0:00:00.173) 0:00:07.006 **********
2026-04-13 00:22:44.410491 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK40STEL16tcq1CkzlrJWf03aF31OJB0Vann42reYOjv47ti+cSdrNO16XeSfFoFossUzZJChD9affrPxZYKndA=)
2026-04-13 00:22:44.410515 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7tUdZlcC02me+uXbzA1V72iGHRb/XLsbET04079/3vDw4yyPOpF+eMEEqGl0i1TREQxQ0JM1SatMha8z+3YCW4ynWS3zr89Ywq4JI2fjjDgCsoe29RwyOAATwE5TFHdUauaKYp+I8+ixFBvKbmxLjWLZnADCSyWSpcSN+WU6ZLMMggYE8wXiO27Cb3hBrvgoc3z68LwG0Y5rhOfDBhBVhPB0mkgj6R7tKmi0a1seXRMqr8OAPs4fvp9tYKRBrU1UjvItvGqY8s3tDLdF5PBC3W+ftIftNhjN8mD324tbwuhfYyUcsPypCgfuVXa4Y1MM6O7XUKMfT2zTDj4bOv4/ZH8Fh30ahbrq0+yAfxRJZ6cJdM0XVtLp3KKXSNlyO9idDG87fITZIXewqQtokX6d8IFWE3Qtg2RkRxwYdz48m8weivruyxZrfKF2r+SKZSY4bgUuXND+oRbL2michfugFmjp4ZLqKMFAPYuMYrmapyrMRk5t/7RnV/sj42hGCT70=)
2026-04-13 00:22:44.410530 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFr5hTYo9PZJLmSLXBiTwsGnnk8A2I7P35SlU6K4rTIj)
2026-04-13 00:22:44.410543 | orchestrator |
2026-04-13 00:22:44.410554 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:44.410566 | orchestrator | Monday 13 April 2026 00:22:39 +0000 (0:00:01.320) 0:00:08.326 **********
2026-04-13 00:22:44.410598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcoBS7qG/nkJ+uVjuEol8IQKjiCg56KmkdcPLv7seENaUrE/w74PfsM8b+HF2VTlMgzJEwmB9Do84KppIqr+aXnS3ahsKiVsMFBekoLWFXF16wRz4p/iKyr/S/DLx7GLUzctstahhP6kwaRqXpiPDEVwjKFJGa3pmPlRpZJxNTye2//EEkwA8FyTliO8N0Q2dal146vm25aqagEYC/i5X0M/amDFE9Q10CJ59SkhcdFCRVZUmxaIy8SMjPZuUH9koARZn9LXIX6YoQQ4g38AAsGBDOANJ+5s+pG7u9c5HQ6FpogCRYt/ijlBsXvfF4BjYrXhtRx8u9m38hLY763AWYRCuJeATgq4myb7tPSfNd3B85OB7G4nXwuCDU6ApJ6jMIG5o/zUYe3HO8b1fVbYCPYSbJbzYJksISFFsXR6umjl89bA7/vhrO/TUgJVJBAghAwobjPcmcWIq/HnqduWCIn0GmWg02O2d0p9KwUWC7udms3QpaSThQpW5VyjQ0HBs=)
2026-04-13 00:22:44.410612 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHNHfw08S1VCv+yfJrfvxxXIU74r1ZV0wkO6KfHO2Mbv)
2026-04-13 00:22:44.410622 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDRwqtLI7woEqwVMa9GxhLoVfI4RBBCDKg+ldhQDj6v3iwZRv+/11IP8GS71HUp9wIET+GgtexuFNoJ8HREB+Kk=)
2026-04-13 00:22:44.410633 | orchestrator |
2026-04-13 00:22:44.410645 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:44.410656 | orchestrator | Monday 13 April 2026 00:22:40 +0000 (0:00:01.088) 0:00:09.415 **********
2026-04-13 00:22:44.410690 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmK5oYsPnjmdMiqW/s4BQec2r1RrTOwZLkeEemsjoll)
2026-04-13 00:22:44.410702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4mtw568ESnCH4DehnASVkRrTuXGa8xRjey//zau3DXVj5oAlj4bfAlyKgLCGjrwf8uykFsWOSlqSPH7eNKIIou83PesYdG8zTGYyBdlpc0QZTQgZyr/xAK3ht4S1sNv9fthpkE2hLPr1b70izGioCakEQyKPaFLnfoTo11uHgy6QNohUood/GtTsnUHrAR2IrfxS/iHuyk5RhdwBatYElqeIg4w0+UUjevNbCvi1SyMEmGwokQTspnWD0SkJXx8Y6JmC3HpDXDnBP0TL4mJRqp6SiAdKklQBreMw8zerKPvEcLrMwfvwNZSXroafQfWhfWHKwh6QPWtLjSHvKALljJEOLHhsaIPB+bsklvcyKoADoSzaKeTo11GxPAMhG6oaYJutz5VHnOj6inRjijRdLWnW/KPuDIJOOsC265EdaMWG1fdAqEBQgdfEX+Jje/gYkeCXT/oyLBf9l4E1QFq9S3XkXQoxYQU96OSR5QnBXsPvxSzEGC5O6vd6Ja4A/9g8=)
2026-04-13 00:22:44.410714 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDuKXWGE2fksVEbXpfODKdQPMQLjhq35Yz9T4+rEwFy2nSi0vHHvwpeQObN1Ma0Aw7mG2BXc9REjABqiLFYFFY=)
2026-04-13 00:22:44.410724 | orchestrator |
2026-04-13 00:22:44.410736 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:44.410746 | orchestrator | Monday 13 April 2026 00:22:41 +0000 (0:00:01.117) 0:00:10.532 **********
2026-04-13 00:22:44.410757 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMyboGiUBjhj1GsE5+tPPon88BlQ20FOLO2r2SKn7jvKehBa7MHZLSCv0DfMhNAhn3gzk3eoSJzAGNlEwVchbuk=)
2026-04-13 00:22:44.410778 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDm0ZdiFgkcxIkZF8mzp84XEM5knpGsO5rHP6nu3uIINDq8veyNbW41ECPXyJvBGmZ4hIbuZf7HBUFCTuoPruPb/QUkIDqrXsQ09WvwX8OkRNVHCGFw+wU6t3AigJw1y1cbtUo/oxu1tHecemWFSmRxPPaRlKYk9OwXuMOAxKSQ3kIrI8T/+mqrhbUMZbMelIY5NbWsmNkGGP0g69LbCeBq0x+JJwzzfSdkoNuNX3g/vMPxc3DvuEILceqCnFWtHLmEBaOyTG/rXUppKkqo3qQ2cdEiTK1bYLyXp7szmI/7JVs0IGF3NL0ECfHnMsWpIkfDlWgOkWnFUgW4J1xw7/jvt+ed2Bt0TIxPUKa31knKIr3OzgACYt8bLxKUE2lPFREHozf0YIjBJCpFIDnGGhvQgLcZqOipEcjcL4YBAZ7KzyDHvzIjf31npiNgNvufDcnoz5Lv1DRlXI+QrR+msDohrPV3q/LLOTHqwJhczjaIDy+VpY/UKMoETzvMtyt9A8E=)
2026-04-13 00:22:44.410855 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKmQpqAOdeO5x6FxeK5knu0Dvdl/dR3PS6SzBSJWl3MO)
2026-04-13 00:22:44.410871 | orchestrator |
2026-04-13 00:22:44.410882 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:44.410893 | orchestrator | Monday 13 April 2026 00:22:42 +0000 (0:00:01.105) 0:00:11.638 **********
2026-04-13 00:22:44.410904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICR+IJtWdY5euzipjImF1KC5YpufPn/be7rrwSL4MN12)
2026-04-13 00:22:44.410915 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+yxN+Py6FSngQ9tzFT4guuralg+1AbLH98H64Cbvc59+slzWhXb9AxFVgpwrLV97SSPmTw+6nBUWGCJmnTNYEMz9wEjR5aAI1nM4dmPBUk1v2FAkOhmFLRadMlV+2AwDLehXoiRyHTrkCszrTJnhJmfm5FODmsuHLcqYjeyV3RaykgQaiKo19rRoXbdtpPfoEmcU5EA+bpOKXdRh/IpAojIkM0yqvz8eeUeyWNUPNHyVbAU0CquuhRQ0rOEhuP1HEKTqUITLiFQPiLREuW4Z8hkIrufjV4KpCgHt2HcuO4AA4wRzAaisSHXGX2EUMxL24zeT26p2Tom9U9endmdoxlcWe1AV6kZD2go4ixqVjHnLQTaGP1VfXbej66e4Phia8Oy9Meb18y7vYJpMMfTj2MRCsdLmpLM4eIZr1fU3Ty1RTNLrIwuw57524/haZj7C9NGnCSnSH9yPb3tBfLB0PBOUImAM8uGvkET27j2pGbcTgCJqyq8lFFRYEhjQ77l8=)
2026-04-13 00:22:44.410926 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOMHxfwXaayn86YxFPoYc+Lon8inya+a8h0owSGf3cSVO9hjfffcrck2YTGogdeelfgba6ep08Tp0DIHwD6/U0g=)
2026-04-13 00:22:44.410936 | orchestrator |
2026-04-13 00:22:44.410947 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-13 00:22:44.410957 | orchestrator | Monday 13 April 2026 00:22:43 +0000 (0:00:01.153) 0:00:12.791 **********
2026-04-13 00:22:44.410979 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDd6dHsJa5QViccvff2gePcTN2rsgA2exX/+Hu7aC6Hx178jZAJCAQRl/mic//2SH1zmuGNimCBqkv8D+9+ThY8pui0dHYrStYKLSbDFCcRC4+G5vK0NsD0taXSFe7QH3vYYUiQXr2qGYajtrlX9H1y7jRzdHkE91xdrTYQQuxFYrWb9Zebc9FVudMYNWOl43xh3+bUUb/OMPKVxoJT1HzHnDseIWYy7UEtcPHiO++l2Up2mYJ2/WsA7gwp5qzZ1bUc8jW1LB2b6ue2pCGvgWCRjD+WkLWAxPsLUK/F74LmQ379rFYIFSmUYu0JxDiho9BrbYYVC+FQ9+3pP4WZFhi9oWrJJLXVhG3HmJ2JFFLmZMmRCfiN7DNpScba+eX5CIGmS3O0ebTgmNUgGEVj7P5TpEDwdd+s0vYRa9OR4nmFnvui/FcMyTLlr4sVBJ3zWSVsbklAhPs0c+7/hhpv2AY4WKkkK4l0UNlXg6vgQHnPUN6QpQudEN2PbcBbKuzO8eU=)
2026-04-13 00:22:56.194623 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAvjiBONBphP1pw6VQjNZtjPScKZypG1VwMOQRj2wBhMV6B82l1xc4naqotEmnRLDx13DXwaSWOz7FCZFUEPNGo=)
2026-04-13 00:22:56.194789 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBplVeK/jcpeoG+em7vaQqKURYz/HOO7LRqwDNOVmiZ/) 2026-04-13 00:22:56.194803 | orchestrator | 2026-04-13 00:22:56.194810 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:56.194817 | orchestrator | Monday 13 April 2026 00:22:45 +0000 (0:00:01.131) 0:00:13.923 ********** 2026-04-13 00:22:56.194824 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvdPCqIOBhGmzjpPYb/jm7+G4GEdTt0uOSvwHEH8KPNTlRMxTV5q6sh8U3qandWEph6/FrXOtfa8wFGPl9tUq7+cPA8tgemUWmJE5Bu4uiq4NWI4Qke6TeYMO/JrIqQ1xAvulI6XND00rcMyqxYObuceNswYgPuHGoELRiXLVDhmWkmEx3XKnty+ysWta+J4YCYJq4Ww2rCI5nhdgOxnjgKy1US9r9KT+naohs422vlPLCIBJwwWxS4iEPTAypYAIfrqMsAFFIgDisy6Wl5QeV+Gyp3KrWsRH6HyNQx3HYj4whMgPhOS1XAsMW+gNXzXJOcavZ7eyHOU4AJWho/2fkhCqCQpjgr6VqjQCESQZfvpKaj4hHYQkVNrvZ9jqpmkcA2QZUk94Ihl8dpuVauPiwRTt1XMgaK3r7I5F0XGc6YkWnNh05wdhfJsDdGG1QsPJ+Njd9lpFtj213ymES3A2pSbo043BvvRvS7Q75xa8x3nE3vXY+2XOdvDVX7XhtKUc=) 2026-04-13 00:22:56.194850 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIujhJsJhRF6UExjpwltxScY/UE0ddBhpeykEzrtEcd5P2k/Awd3ZWf3VJzz677mFI59Ih92XuR1vBGJypEXHYg=) 2026-04-13 00:22:56.194855 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtCKOzj+8RNSWFo46KMyjHN4XaCHhisUHzTkpQiNffK) 2026-04-13 00:22:56.194861 | orchestrator | 2026-04-13 00:22:56.194866 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-13 00:22:56.194872 | orchestrator | Monday 13 April 2026 00:22:46 +0000 (0:00:01.102) 0:00:15.025 ********** 2026-04-13 00:22:56.194878 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-13 00:22:56.194884 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-0) 2026-04-13 00:22:56.194889 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-13 00:22:56.194894 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-13 00:22:56.194899 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-13 00:22:56.194904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-13 00:22:56.194909 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-13 00:22:56.194914 | orchestrator | 2026-04-13 00:22:56.194919 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-13 00:22:56.194925 | orchestrator | Monday 13 April 2026 00:22:51 +0000 (0:00:05.309) 0:00:20.335 ********** 2026-04-13 00:22:56.194931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-13 00:22:56.194938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-13 00:22:56.194943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-13 00:22:56.194948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-13 00:22:56.194954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-13 00:22:56.194959 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-13 00:22:56.194963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-13 00:22:56.194968 | orchestrator | 2026-04-13 00:22:56.194973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:56.194978 | orchestrator | Monday 13 April 2026 00:22:51 +0000 (0:00:00.185) 0:00:20.521 ********** 2026-04-13 00:22:56.194999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFr5hTYo9PZJLmSLXBiTwsGnnk8A2I7P35SlU6K4rTIj) 2026-04-13 00:22:56.195021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7tUdZlcC02me+uXbzA1V72iGHRb/XLsbET04079/3vDw4yyPOpF+eMEEqGl0i1TREQxQ0JM1SatMha8z+3YCW4ynWS3zr89Ywq4JI2fjjDgCsoe29RwyOAATwE5TFHdUauaKYp+I8+ixFBvKbmxLjWLZnADCSyWSpcSN+WU6ZLMMggYE8wXiO27Cb3hBrvgoc3z68LwG0Y5rhOfDBhBVhPB0mkgj6R7tKmi0a1seXRMqr8OAPs4fvp9tYKRBrU1UjvItvGqY8s3tDLdF5PBC3W+ftIftNhjN8mD324tbwuhfYyUcsPypCgfuVXa4Y1MM6O7XUKMfT2zTDj4bOv4/ZH8Fh30ahbrq0+yAfxRJZ6cJdM0XVtLp3KKXSNlyO9idDG87fITZIXewqQtokX6d8IFWE3Qtg2RkRxwYdz48m8weivruyxZrfKF2r+SKZSY4bgUuXND+oRbL2michfugFmjp4ZLqKMFAPYuMYrmapyrMRk5t/7RnV/sj42hGCT70=) 2026-04-13 00:22:56.195032 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK40STEL16tcq1CkzlrJWf03aF31OJB0Vann42reYOjv47ti+cSdrNO16XeSfFoFossUzZJChD9affrPxZYKndA=) 2026-04-13 00:22:56.195038 | orchestrator | 2026-04-13 00:22:56.195043 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:56.195048 | orchestrator | Monday 13 April 2026 
00:22:52 +0000 (0:00:01.090) 0:00:21.612 ********** 2026-04-13 00:22:56.195053 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcoBS7qG/nkJ+uVjuEol8IQKjiCg56KmkdcPLv7seENaUrE/w74PfsM8b+HF2VTlMgzJEwmB9Do84KppIqr+aXnS3ahsKiVsMFBekoLWFXF16wRz4p/iKyr/S/DLx7GLUzctstahhP6kwaRqXpiPDEVwjKFJGa3pmPlRpZJxNTye2//EEkwA8FyTliO8N0Q2dal146vm25aqagEYC/i5X0M/amDFE9Q10CJ59SkhcdFCRVZUmxaIy8SMjPZuUH9koARZn9LXIX6YoQQ4g38AAsGBDOANJ+5s+pG7u9c5HQ6FpogCRYt/ijlBsXvfF4BjYrXhtRx8u9m38hLY763AWYRCuJeATgq4myb7tPSfNd3B85OB7G4nXwuCDU6ApJ6jMIG5o/zUYe3HO8b1fVbYCPYSbJbzYJksISFFsXR6umjl89bA7/vhrO/TUgJVJBAghAwobjPcmcWIq/HnqduWCIn0GmWg02O2d0p9KwUWC7udms3QpaSThQpW5VyjQ0HBs=) 2026-04-13 00:22:56.195059 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDRwqtLI7woEqwVMa9GxhLoVfI4RBBCDKg+ldhQDj6v3iwZRv+/11IP8GS71HUp9wIET+GgtexuFNoJ8HREB+Kk=) 2026-04-13 00:22:56.195064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHNHfw08S1VCv+yfJrfvxxXIU74r1ZV0wkO6KfHO2Mbv) 2026-04-13 00:22:56.195069 | orchestrator | 2026-04-13 00:22:56.195074 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:56.195079 | orchestrator | Monday 13 April 2026 00:22:53 +0000 (0:00:01.100) 0:00:22.712 ********** 2026-04-13 00:22:56.195084 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4mtw568ESnCH4DehnASVkRrTuXGa8xRjey//zau3DXVj5oAlj4bfAlyKgLCGjrwf8uykFsWOSlqSPH7eNKIIou83PesYdG8zTGYyBdlpc0QZTQgZyr/xAK3ht4S1sNv9fthpkE2hLPr1b70izGioCakEQyKPaFLnfoTo11uHgy6QNohUood/GtTsnUHrAR2IrfxS/iHuyk5RhdwBatYElqeIg4w0+UUjevNbCvi1SyMEmGwokQTspnWD0SkJXx8Y6JmC3HpDXDnBP0TL4mJRqp6SiAdKklQBreMw8zerKPvEcLrMwfvwNZSXroafQfWhfWHKwh6QPWtLjSHvKALljJEOLHhsaIPB+bsklvcyKoADoSzaKeTo11GxPAMhG6oaYJutz5VHnOj6inRjijRdLWnW/KPuDIJOOsC265EdaMWG1fdAqEBQgdfEX+Jje/gYkeCXT/oyLBf9l4E1QFq9S3XkXQoxYQU96OSR5QnBXsPvxSzEGC5O6vd6Ja4A/9g8=) 2026-04-13 00:22:56.195089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmK5oYsPnjmdMiqW/s4BQec2r1RrTOwZLkeEemsjoll) 2026-04-13 00:22:56.195094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDuKXWGE2fksVEbXpfODKdQPMQLjhq35Yz9T4+rEwFy2nSi0vHHvwpeQObN1Ma0Aw7mG2BXc9REjABqiLFYFFY=) 2026-04-13 00:22:56.195099 | orchestrator | 2026-04-13 00:22:56.195104 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:22:56.195109 | orchestrator | Monday 13 April 2026 00:22:55 +0000 (0:00:01.199) 0:00:23.912 ********** 2026-04-13 00:22:56.195113 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMyboGiUBjhj1GsE5+tPPon88BlQ20FOLO2r2SKn7jvKehBa7MHZLSCv0DfMhNAhn3gzk3eoSJzAGNlEwVchbuk=) 2026-04-13 00:22:56.195119 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDm0ZdiFgkcxIkZF8mzp84XEM5knpGsO5rHP6nu3uIINDq8veyNbW41ECPXyJvBGmZ4hIbuZf7HBUFCTuoPruPb/QUkIDqrXsQ09WvwX8OkRNVHCGFw+wU6t3AigJw1y1cbtUo/oxu1tHecemWFSmRxPPaRlKYk9OwXuMOAxKSQ3kIrI8T/+mqrhbUMZbMelIY5NbWsmNkGGP0g69LbCeBq0x+JJwzzfSdkoNuNX3g/vMPxc3DvuEILceqCnFWtHLmEBaOyTG/rXUppKkqo3qQ2cdEiTK1bYLyXp7szmI/7JVs0IGF3NL0ECfHnMsWpIkfDlWgOkWnFUgW4J1xw7/jvt+ed2Bt0TIxPUKa31knKIr3OzgACYt8bLxKUE2lPFREHozf0YIjBJCpFIDnGGhvQgLcZqOipEcjcL4YBAZ7KzyDHvzIjf31npiNgNvufDcnoz5Lv1DRlXI+QrR+msDohrPV3q/LLOTHqwJhczjaIDy+VpY/UKMoETzvMtyt9A8E=) 2026-04-13 00:22:56.195133 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKmQpqAOdeO5x6FxeK5knu0Dvdl/dR3PS6SzBSJWl3MO) 2026-04-13 00:23:00.687112 | orchestrator | 2026-04-13 00:23:00.687306 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:23:00.687331 | orchestrator | Monday 13 April 2026 00:22:56 +0000 (0:00:01.117) 0:00:25.029 ********** 2026-04-13 00:23:00.687345 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+yxN+Py6FSngQ9tzFT4guuralg+1AbLH98H64Cbvc59+slzWhXb9AxFVgpwrLV97SSPmTw+6nBUWGCJmnTNYEMz9wEjR5aAI1nM4dmPBUk1v2FAkOhmFLRadMlV+2AwDLehXoiRyHTrkCszrTJnhJmfm5FODmsuHLcqYjeyV3RaykgQaiKo19rRoXbdtpPfoEmcU5EA+bpOKXdRh/IpAojIkM0yqvz8eeUeyWNUPNHyVbAU0CquuhRQ0rOEhuP1HEKTqUITLiFQPiLREuW4Z8hkIrufjV4KpCgHt2HcuO4AA4wRzAaisSHXGX2EUMxL24zeT26p2Tom9U9endmdoxlcWe1AV6kZD2go4ixqVjHnLQTaGP1VfXbej66e4Phia8Oy9Meb18y7vYJpMMfTj2MRCsdLmpLM4eIZr1fU3Ty1RTNLrIwuw57524/haZj7C9NGnCSnSH9yPb3tBfLB0PBOUImAM8uGvkET27j2pGbcTgCJqyq8lFFRYEhjQ77l8=) 2026-04-13 00:23:00.687361 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOMHxfwXaayn86YxFPoYc+Lon8inya+a8h0owSGf3cSVO9hjfffcrck2YTGogdeelfgba6ep08Tp0DIHwD6/U0g=) 2026-04-13 00:23:00.687374 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICR+IJtWdY5euzipjImF1KC5YpufPn/be7rrwSL4MN12) 2026-04-13 00:23:00.687387 | orchestrator | 2026-04-13 00:23:00.687397 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:23:00.687408 | orchestrator | Monday 13 April 2026 00:22:57 +0000 (0:00:01.148) 0:00:26.177 ********** 2026-04-13 00:23:00.687419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDd6dHsJa5QViccvff2gePcTN2rsgA2exX/+Hu7aC6Hx178jZAJCAQRl/mic//2SH1zmuGNimCBqkv8D+9+ThY8pui0dHYrStYKLSbDFCcRC4+G5vK0NsD0taXSFe7QH3vYYUiQXr2qGYajtrlX9H1y7jRzdHkE91xdrTYQQuxFYrWb9Zebc9FVudMYNWOl43xh3+bUUb/OMPKVxoJT1HzHnDseIWYy7UEtcPHiO++l2Up2mYJ2/WsA7gwp5qzZ1bUc8jW1LB2b6ue2pCGvgWCRjD+WkLWAxPsLUK/F74LmQ379rFYIFSmUYu0JxDiho9BrbYYVC+FQ9+3pP4WZFhi9oWrJJLXVhG3HmJ2JFFLmZMmRCfiN7DNpScba+eX5CIGmS3O0ebTgmNUgGEVj7P5TpEDwdd+s0vYRa9OR4nmFnvui/FcMyTLlr4sVBJ3zWSVsbklAhPs0c+7/hhpv2AY4WKkkK4l0UNlXg6vgQHnPUN6QpQudEN2PbcBbKuzO8eU=) 2026-04-13 00:23:00.687431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBplVeK/jcpeoG+em7vaQqKURYz/HOO7LRqwDNOVmiZ/) 2026-04-13 00:23:00.687462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAvjiBONBphP1pw6VQjNZtjPScKZypG1VwMOQRj2wBhMV6B82l1xc4naqotEmnRLDx13DXwaSWOz7FCZFUEPNGo=) 2026-04-13 00:23:00.687475 | orchestrator | 2026-04-13 00:23:00.687486 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-13 00:23:00.687497 | orchestrator | Monday 13 April 2026 00:22:58 +0000 (0:00:01.110) 0:00:27.288 ********** 2026-04-13 00:23:00.687508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIujhJsJhRF6UExjpwltxScY/UE0ddBhpeykEzrtEcd5P2k/Awd3ZWf3VJzz677mFI59Ih92XuR1vBGJypEXHYg=) 2026-04-13 00:23:00.687520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvdPCqIOBhGmzjpPYb/jm7+G4GEdTt0uOSvwHEH8KPNTlRMxTV5q6sh8U3qandWEph6/FrXOtfa8wFGPl9tUq7+cPA8tgemUWmJE5Bu4uiq4NWI4Qke6TeYMO/JrIqQ1xAvulI6XND00rcMyqxYObuceNswYgPuHGoELRiXLVDhmWkmEx3XKnty+ysWta+J4YCYJq4Ww2rCI5nhdgOxnjgKy1US9r9KT+naohs422vlPLCIBJwwWxS4iEPTAypYAIfrqMsAFFIgDisy6Wl5QeV+Gyp3KrWsRH6HyNQx3HYj4whMgPhOS1XAsMW+gNXzXJOcavZ7eyHOU4AJWho/2fkhCqCQpjgr6VqjQCESQZfvpKaj4hHYQkVNrvZ9jqpmkcA2QZUk94Ihl8dpuVauPiwRTt1XMgaK3r7I5F0XGc6YkWnNh05wdhfJsDdGG1QsPJ+Njd9lpFtj213ymES3A2pSbo043BvvRvS7Q75xa8x3nE3vXY+2XOdvDVX7XhtKUc=) 2026-04-13 00:23:00.687556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtCKOzj+8RNSWFo46KMyjHN4XaCHhisUHzTkpQiNffK) 2026-04-13 00:23:00.687568 | orchestrator | 2026-04-13 00:23:00.687579 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-13 00:23:00.687590 | orchestrator | Monday 13 April 2026 00:22:59 +0000 (0:00:01.091) 0:00:28.380 ********** 2026-04-13 00:23:00.687602 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-13 00:23:00.687613 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-13 00:23:00.687623 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-13 00:23:00.687634 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-13 00:23:00.687681 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-13 00:23:00.687696 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-13 00:23:00.687708 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-13 00:23:00.687720 | orchestrator | skipping: 
[testbed-manager] 2026-04-13 00:23:00.687734 | orchestrator | 2026-04-13 00:23:00.687767 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-13 00:23:00.687782 | orchestrator | Monday 13 April 2026 00:22:59 +0000 (0:00:00.244) 0:00:28.624 ********** 2026-04-13 00:23:00.687794 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:23:00.687807 | orchestrator | 2026-04-13 00:23:00.687820 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-13 00:23:00.687832 | orchestrator | Monday 13 April 2026 00:22:59 +0000 (0:00:00.057) 0:00:28.682 ********** 2026-04-13 00:23:00.687844 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:23:00.687857 | orchestrator | 2026-04-13 00:23:00.687869 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-13 00:23:00.687882 | orchestrator | Monday 13 April 2026 00:22:59 +0000 (0:00:00.055) 0:00:28.738 ********** 2026-04-13 00:23:00.687894 | orchestrator | changed: [testbed-manager] 2026-04-13 00:23:00.687907 | orchestrator | 2026-04-13 00:23:00.687921 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:23:00.687933 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-13 00:23:00.687945 | orchestrator | 2026-04-13 00:23:00.687956 | orchestrator | 2026-04-13 00:23:00.687967 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:23:00.687977 | orchestrator | Monday 13 April 2026 00:23:00 +0000 (0:00:00.525) 0:00:29.263 ********** 2026-04-13 00:23:00.687988 | orchestrator | =============================================================================== 2026-04-13 00:23:00.687999 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.63s 2026-04-13 
00:23:00.688038 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.31s 2026-04-13 00:23:00.688050 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2026-04-13 00:23:00.688061 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-13 00:23:00.688071 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-13 00:23:00.688082 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-04-13 00:23:00.688092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-04-13 00:23:00.688103 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-13 00:23:00.688113 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-13 00:23:00.688133 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-13 00:23:00.688144 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-13 00:23:00.688155 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-13 00:23:00.688165 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-13 00:23:00.688176 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-13 00:23:00.688187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-13 00:23:00.688197 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-13 00:23:00.688208 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.53s 2026-04-13 
00:23:00.688219 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.24s 2026-04-13 00:23:00.688229 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-04-13 00:23:00.688240 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-04-13 00:23:00.893388 | orchestrator | + osism apply squid 2026-04-13 00:23:12.275565 | orchestrator | 2026-04-13 00:23:12 | INFO  | Prepare task for execution of squid. 2026-04-13 00:23:12.347977 | orchestrator | 2026-04-13 00:23:12 | INFO  | Task d7028b27-c3f2-4d32-9aa2-1f58a178db93 (squid) was prepared for execution. 2026-04-13 00:23:12.348070 | orchestrator | 2026-04-13 00:23:12 | INFO  | It takes a moment until task d7028b27-c3f2-4d32-9aa2-1f58a178db93 (squid) has been started and output is visible here. 2026-04-13 00:25:07.019937 | orchestrator | 2026-04-13 00:25:07.020017 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-13 00:25:07.020030 | orchestrator | 2026-04-13 00:25:07.020041 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-13 00:25:07.020051 | orchestrator | Monday 13 April 2026 00:23:15 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-04-13 00:25:07.020060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-13 00:25:07.020071 | orchestrator | 2026-04-13 00:25:07.020080 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-13 00:25:07.020090 | orchestrator | Monday 13 April 2026 00:23:15 +0000 (0:00:00.103) 0:00:00.299 ********** 2026-04-13 00:25:07.020099 | orchestrator | ok: [testbed-manager] 2026-04-13 00:25:07.020109 | orchestrator | 2026-04-13 00:25:07.020118 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-13 00:25:07.020127 | orchestrator | Monday 13 April 2026 00:23:18 +0000 (0:00:02.526) 0:00:02.825 ********** 2026-04-13 00:25:07.020137 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-13 00:25:07.020146 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-13 00:25:07.020156 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-13 00:25:07.020165 | orchestrator | 2026-04-13 00:25:07.020174 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-13 00:25:07.020184 | orchestrator | Monday 13 April 2026 00:23:19 +0000 (0:00:01.323) 0:00:04.149 ********** 2026-04-13 00:25:07.020193 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-13 00:25:07.020202 | orchestrator | 2026-04-13 00:25:07.020212 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-13 00:25:07.020221 | orchestrator | Monday 13 April 2026 00:23:20 +0000 (0:00:01.079) 0:00:05.228 ********** 2026-04-13 00:25:07.020231 | orchestrator | ok: [testbed-manager] 2026-04-13 00:25:07.020240 | orchestrator | 2026-04-13 00:25:07.020249 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-13 00:25:07.020259 | orchestrator | Monday 13 April 2026 00:23:20 +0000 (0:00:00.361) 0:00:05.590 ********** 2026-04-13 00:25:07.020288 | orchestrator | changed: [testbed-manager] 2026-04-13 00:25:07.020302 | orchestrator | 2026-04-13 00:25:07.020311 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-13 00:25:07.020320 | orchestrator | Monday 13 April 2026 00:23:21 +0000 (0:00:00.909) 0:00:06.499 ********** 2026-04-13 00:25:07.020329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-04-13 00:25:07.020338 | orchestrator | ok: [testbed-manager] 2026-04-13 00:25:07.020347 | orchestrator | 2026-04-13 00:25:07.020357 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-13 00:25:07.020366 | orchestrator | Monday 13 April 2026 00:23:53 +0000 (0:00:32.000) 0:00:38.500 ********** 2026-04-13 00:25:07.020375 | orchestrator | changed: [testbed-manager] 2026-04-13 00:25:07.020384 | orchestrator | 2026-04-13 00:25:07.020393 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-13 00:25:07.020402 | orchestrator | Monday 13 April 2026 00:24:05 +0000 (0:00:12.113) 0:00:50.613 ********** 2026-04-13 00:25:07.020412 | orchestrator | Pausing for 60 seconds 2026-04-13 00:25:07.020421 | orchestrator | changed: [testbed-manager] 2026-04-13 00:25:07.020430 | orchestrator | 2026-04-13 00:25:07.020451 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-13 00:25:07.020461 | orchestrator | Monday 13 April 2026 00:25:06 +0000 (0:01:00.111) 0:01:50.725 ********** 2026-04-13 00:25:07.020470 | orchestrator | ok: [testbed-manager] 2026-04-13 00:25:07.020479 | orchestrator | 2026-04-13 00:25:07.020489 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for a healthy squid service] ****** 2026-04-13 00:25:07.020498 | orchestrator | Monday 13 April 2026 00:25:06 +0000 (0:00:00.066) 0:01:50.791 ********** 2026-04-13 00:25:07.020545 | orchestrator | changed: [testbed-manager] 2026-04-13 00:25:07.020556 | orchestrator | 2026-04-13 00:25:07.020567 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:25:07.020577 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:25:07.020587 | orchestrator | 2026-04-13 00:25:07.020598 | orchestrator | 2026-04-13 00:25:07.020608 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:25:07.020619 | orchestrator | Monday 13 April 2026 00:25:06 +0000 (0:00:00.656) 0:01:51.448 ********** 2026-04-13 00:25:07.020629 | orchestrator | =============================================================================== 2026-04-13 00:25:07.020639 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2026-04-13 00:25:07.020650 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.00s 2026-04-13 00:25:07.020661 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.11s 2026-04-13 00:25:07.020672 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.53s 2026-04-13 00:25:07.020682 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.32s 2026-04-13 00:25:07.020692 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2026-04-13 00:25:07.020702 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2026-04-13 00:25:07.020712 | orchestrator | osism.services.squid : Wait for a healthy squid service ----------------- 0.66s 2026-04-13 00:25:07.020723 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2026-04-13 00:25:07.020733 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-04-13 00:25:07.020744 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-04-13 00:25:07.163047 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-13 00:25:07.163651 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-13 00:25:07.223573 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-13 00:25:07.223655 | orchestrator | + 
/opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2024.2 2026-04-13 00:25:07.231854 | orchestrator | + set -e 2026-04-13 00:25:07.231965 | orchestrator | + NAMESPACE=kolla/release/2024.2 2026-04-13 00:25:07.231982 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2024.2#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-13 00:25:07.237495 | orchestrator | ++ semver 10.0.0 9.0.0 2026-04-13 00:25:07.295398 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-13 00:25:07.296278 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-13 00:25:18.496323 | orchestrator | 2026-04-13 00:25:18 | INFO  | Prepare task for execution of operator. 2026-04-13 00:25:18.581028 | orchestrator | 2026-04-13 00:25:18 | INFO  | Task 74075ebf-624f-4976-a9b2-290b8d728f17 (operator) was prepared for execution. 2026-04-13 00:25:18.581141 | orchestrator | 2026-04-13 00:25:18 | INFO  | It takes a moment until task 74075ebf-624f-4976-a9b2-290b8d728f17 (operator) has been started and output is visible here. 
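The shell trace above (the `semver` comparisons guarding `set-kolla-namespace.sh` and its `sed -i` on `kolla.yml`) can be sketched as follows. This is a minimal approximation, not the testbed's actual script: `semver_cmp` is a hypothetical stand-in for the `semver` helper seen in the trace (which prints a -1/0/1-style comparison result), and the sketch writes to a temporary file instead of the real inventory path.

```shell
#!/usr/bin/env bash
set -e

# semver_cmp: hypothetical stand-in for the `semver` helper in the trace.
# Prints -1, 0 or 1 for less-than, equal, greater-than, using sort -V.
semver_cmp() {
    if [ "$1" = "$2" ]; then echo 0; return; fi
    if [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

VERSION=10.0.0                      # release being deployed, as in the trace
NAMESPACE=kolla/release/2024.2      # argument passed to set-kolla-namespace.sh
KOLLA_YML=$(mktemp)                 # demo file standing in for the inventory path
echo 'docker_namespace: placeholder' > "$KOLLA_YML"

# Pin the Kolla image namespace only for tagged (non-latest) releases that are
# at least 9.0.0, mirroring the version guards and `sed -i` seen in the log.
if [ "$VERSION" != latest ] && [ "$(semver_cmp "$VERSION" 9.0.0)" -ge 0 ]; then
    sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"
fi
cat "$KOLLA_YML"
```

Note that `sort -V` does not implement SemVer pre-release precedence (it orders `10.0.0-0` after `10.0.0`, while SemVer puts pre-releases first), so for comparisons like the trace's `semver 10.0.0 10.0.0-0` a real SemVer-aware helper is required.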
2026-04-13 00:25:33.820737 | orchestrator | 2026-04-13 00:25:33.820863 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-13 00:25:33.820879 | orchestrator | 2026-04-13 00:25:33.820891 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-13 00:25:33.820902 | orchestrator | Monday 13 April 2026 00:25:21 +0000 (0:00:00.212) 0:00:00.212 ********** 2026-04-13 00:25:33.820912 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:25:33.820923 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:25:33.820932 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:25:33.820942 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:25:33.820951 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:25:33.820960 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:25:33.820969 | orchestrator | 2026-04-13 00:25:33.820979 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-13 00:25:33.820989 | orchestrator | Monday 13 April 2026 00:25:25 +0000 (0:00:03.394) 0:00:03.607 ********** 2026-04-13 00:25:33.820998 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:25:33.821008 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:25:33.821017 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:25:33.821026 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:25:33.821036 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:25:33.821045 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:25:33.821055 | orchestrator | 2026-04-13 00:25:33.821064 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-13 00:25:33.821073 | orchestrator | 2026-04-13 00:25:33.821083 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-13 00:25:33.821092 | orchestrator | Monday 13 April 2026 00:25:26 +0000 (0:00:00.860) 0:00:04.467 ********** 2026-04-13 
00:25:33.821103 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:25:33.821112 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:25:33.821121 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:25:33.821130 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:25:33.821140 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:25:33.821149 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:25:33.821158 | orchestrator | 2026-04-13 00:25:33.821168 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-13 00:25:33.821178 | orchestrator | Monday 13 April 2026 00:25:26 +0000 (0:00:00.178) 0:00:04.646 ********** 2026-04-13 00:25:33.821187 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:25:33.821196 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:25:33.821206 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:25:33.821215 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:25:33.821224 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:25:33.821233 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:25:33.821243 | orchestrator | 2026-04-13 00:25:33.821252 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-13 00:25:33.821264 | orchestrator | Monday 13 April 2026 00:25:26 +0000 (0:00:00.164) 0:00:04.810 ********** 2026-04-13 00:25:33.821275 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:33.821287 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:33.821299 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:33.821328 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:33.821339 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:33.821350 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:33.821361 | orchestrator | 2026-04-13 00:25:33.821373 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-13 00:25:33.821383 | orchestrator | Monday 13 April 2026 
00:25:27 +0000 (0:00:00.700) 0:00:05.511 ********** 2026-04-13 00:25:33.821392 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:33.821401 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:33.821410 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:33.821420 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:33.821429 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:33.821438 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:33.821448 | orchestrator | 2026-04-13 00:25:33.821457 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-13 00:25:33.821467 | orchestrator | Monday 13 April 2026 00:25:28 +0000 (0:00:00.894) 0:00:06.406 ********** 2026-04-13 00:25:33.821476 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-13 00:25:33.821519 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-13 00:25:33.821531 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-13 00:25:33.821547 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-13 00:25:33.821581 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-13 00:25:33.821597 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-13 00:25:33.821628 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-13 00:25:33.821645 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-13 00:25:33.821658 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-13 00:25:33.821667 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-13 00:25:33.821677 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-13 00:25:33.821686 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-13 00:25:33.821696 | orchestrator | 2026-04-13 00:25:33.821705 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-13 00:25:33.821715 | orchestrator | Monday 13 
April 2026 00:25:29 +0000 (0:00:01.143) 0:00:07.549 ********** 2026-04-13 00:25:33.821725 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:33.821734 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:33.821789 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:33.821800 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:33.821809 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:33.821819 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:33.821828 | orchestrator | 2026-04-13 00:25:33.821838 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-13 00:25:33.821848 | orchestrator | Monday 13 April 2026 00:25:30 +0000 (0:00:01.363) 0:00:08.913 ********** 2026-04-13 00:25:33.821858 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:25:33.821868 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:25:33.821877 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:25:33.821886 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:25:33.821897 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:25:33.821923 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-13 00:25:33.821933 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-13 00:25:33.821942 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-13 00:25:33.821952 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-13 00:25:33.821961 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-13 00:25:33.821971 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-13 00:25:33.821980 | orchestrator | changed: [testbed-node-4] => (item=export 
LANG=C.UTF-8) 2026-04-13 00:25:33.821999 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:25:33.822009 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-13 00:25:33.822180 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2026-04-13 00:25:33.822191 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-13 00:25:33.822201 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:25:33.822210 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:25:33.822220 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:25:33.822229 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:25:33.822239 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-13 00:25:33.822248 | orchestrator | 2026-04-13 00:25:33.822258 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-13 00:25:33.822268 | orchestrator | Monday 13 April 2026 00:25:31 +0000 (0:00:01.211) 0:00:10.125 ********** 2026-04-13 00:25:33.822278 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:33.822288 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:33.822297 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:33.822307 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:33.822316 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:33.822326 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:33.822335 | orchestrator | 2026-04-13 00:25:33.822345 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-13 00:25:33.822366 | orchestrator | Monday 13 April 2026 00:25:31 +0000 (0:00:00.176) 0:00:10.301 
********** 2026-04-13 00:25:33.822376 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:33.822386 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:33.822395 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:33.822405 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:33.822414 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:33.822423 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:33.822433 | orchestrator | 2026-04-13 00:25:33.822442 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-13 00:25:33.822452 | orchestrator | Monday 13 April 2026 00:25:32 +0000 (0:00:00.183) 0:00:10.485 ********** 2026-04-13 00:25:33.822462 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:33.822471 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:33.822521 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:33.822532 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:33.822542 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:33.822551 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:33.822560 | orchestrator | 2026-04-13 00:25:33.822570 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-13 00:25:33.822580 | orchestrator | Monday 13 April 2026 00:25:32 +0000 (0:00:00.535) 0:00:11.020 ********** 2026-04-13 00:25:33.822589 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:33.822598 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:33.822608 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:33.822617 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:33.822626 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:33.822636 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:33.822645 | orchestrator | 2026-04-13 00:25:33.822654 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] 
************************ 2026-04-13 00:25:33.822664 | orchestrator | Monday 13 April 2026 00:25:32 +0000 (0:00:00.180) 0:00:11.201 ********** 2026-04-13 00:25:33.822674 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 00:25:33.822683 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-13 00:25:33.822692 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:33.822710 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:33.822719 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:25:33.822729 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 00:25:33.822739 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:33.822748 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 00:25:33.822757 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:33.822767 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:33.822776 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-13 00:25:33.822786 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:33.822795 | orchestrator | 2026-04-13 00:25:33.822805 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-13 00:25:33.822814 | orchestrator | Monday 13 April 2026 00:25:33 +0000 (0:00:00.675) 0:00:11.877 ********** 2026-04-13 00:25:33.822824 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:33.822833 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:33.822842 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:33.822852 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:33.822861 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:33.822870 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:33.822879 | orchestrator | 2026-04-13 00:25:33.822889 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-13 00:25:33.822898 | orchestrator | 
Monday 13 April 2026 00:25:33 +0000 (0:00:00.150) 0:00:12.027 ********** 2026-04-13 00:25:33.822908 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:33.822922 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:33.822931 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:33.822941 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:33.822960 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:35.153050 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:35.153151 | orchestrator | 2026-04-13 00:25:35.153167 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-13 00:25:35.153180 | orchestrator | Monday 13 April 2026 00:25:33 +0000 (0:00:00.163) 0:00:12.191 ********** 2026-04-13 00:25:35.153191 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:35.153202 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:35.153213 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:35.153224 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:35.153235 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:35.153246 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:35.153256 | orchestrator | 2026-04-13 00:25:35.153267 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-13 00:25:35.153279 | orchestrator | Monday 13 April 2026 00:25:33 +0000 (0:00:00.160) 0:00:12.352 ********** 2026-04-13 00:25:35.153298 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:25:35.153316 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:25:35.153334 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:25:35.153351 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:25:35.153367 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:25:35.153384 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:25:35.153404 | orchestrator | 2026-04-13 00:25:35.153423 | 
orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-13 00:25:35.153441 | orchestrator | Monday 13 April 2026 00:25:34 +0000 (0:00:00.694) 0:00:13.046 ********** 2026-04-13 00:25:35.153460 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:25:35.153544 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:25:35.153559 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:25:35.153572 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:25:35.153585 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:25:35.153598 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:25:35.153610 | orchestrator | 2026-04-13 00:25:35.153622 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:25:35.153636 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:35.153679 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:35.153693 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:35.153706 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:35.153724 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:35.153744 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-13 00:25:35.153781 | orchestrator | 2026-04-13 00:25:35.153811 | orchestrator | 2026-04-13 00:25:35.153828 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:25:35.153846 | orchestrator | Monday 13 April 2026 00:25:34 +0000 (0:00:00.246) 0:00:13.293 ********** 2026-04-13 00:25:35.153865 | orchestrator | 
=============================================================================== 2026-04-13 00:25:35.153884 | orchestrator | Gathering Facts --------------------------------------------------------- 3.40s 2026-04-13 00:25:35.153902 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2026-04-13 00:25:35.153917 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.21s 2026-04-13 00:25:35.153936 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s 2026-04-13 00:25:35.153983 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s 2026-04-13 00:25:35.154001 | orchestrator | Do not require tty for all users ---------------------------------------- 0.86s 2026-04-13 00:25:35.154095 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.70s 2026-04-13 00:25:35.154115 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2026-04-13 00:25:35.154134 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s 2026-04-13 00:25:35.154153 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2026-04-13 00:25:35.154172 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-04-13 00:25:35.154191 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s 2026-04-13 00:25:35.154207 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-04-13 00:25:35.154224 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2026-04-13 00:25:35.154240 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-04-13 00:25:35.154256 
| orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2026-04-13 00:25:35.154273 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-04-13 00:25:35.154289 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-04-13 00:25:35.154308 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-04-13 00:25:35.354236 | orchestrator | + osism apply --environment custom facts 2026-04-13 00:25:36.713143 | orchestrator | 2026-04-13 00:25:36 | INFO  | Trying to run play facts in environment custom 2026-04-13 00:25:46.828856 | orchestrator | 2026-04-13 00:25:46 | INFO  | Prepare task for execution of facts. 2026-04-13 00:25:46.907006 | orchestrator | 2026-04-13 00:25:46 | INFO  | Task da1bdfa4-c0b0-4d12-89d9-669966865b35 (facts) was prepared for execution. 2026-04-13 00:25:46.907088 | orchestrator | 2026-04-13 00:25:46 | INFO  | It takes a moment until task da1bdfa4-c0b0-4d12-89d9-669966865b35 (facts) has been started and output is visible here. 
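The `[WARNING]` in the operator play above notes that Ansible auto-created `/root/.ansible/tmp` with mode 0700 and suggests creating the remote_tmp directory manually to avoid the issue. A minimal sketch of that pre-creation, shown against a scratch path instead of the real `/root/.ansible/tmp`:

```shell
#!/usr/bin/env bash
# Pre-create Ansible's remote_tmp with the expected permissions, per the
# warning's own advice; a scratch directory stands in for /root/.ansible/tmp.
set -e

remote_tmp="$(mktemp -d)/.ansible/tmp"   # stand-in for /root/.ansible/tmp

mkdir -p "${remote_tmp}"
chmod 700 "${remote_tmp}"

stat -c '%a' "${remote_tmp}"   # 700
```

With the directory already present at the correct mode, Ansible skips the auto-creation and the warning does not fire on subsequent runs.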
2026-04-13 00:26:28.521770 | orchestrator | 2026-04-13 00:26:28.521881 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-13 00:26:28.521898 | orchestrator | 2026-04-13 00:26:28.521911 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-13 00:26:28.521922 | orchestrator | Monday 13 April 2026 00:25:50 +0000 (0:00:00.133) 0:00:00.133 ********** 2026-04-13 00:26:28.521933 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:28.521946 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:28.521957 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:28.521968 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:28.521979 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:28.521990 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:28.522000 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:28.522012 | orchestrator | 2026-04-13 00:26:28.522089 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-13 00:26:28.522102 | orchestrator | Monday 13 April 2026 00:25:51 +0000 (0:00:01.561) 0:00:01.694 ********** 2026-04-13 00:26:28.522112 | orchestrator | ok: [testbed-manager] 2026-04-13 00:26:28.522124 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:26:28.522134 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:26:28.522145 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:28.522164 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:26:28.522175 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:28.522187 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:28.522197 | orchestrator | 2026-04-13 00:26:28.522251 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-13 00:26:28.522265 | orchestrator | 2026-04-13 00:26:28.522277 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-04-13 00:26:28.522288 | orchestrator | Monday 13 April 2026 00:25:52 +0000 (0:00:01.101) 0:00:02.796 ********** 2026-04-13 00:26:28.522300 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:28.522313 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:28.522326 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:28.522339 | orchestrator | 2026-04-13 00:26:28.522352 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-13 00:26:28.522365 | orchestrator | Monday 13 April 2026 00:25:52 +0000 (0:00:00.105) 0:00:02.901 ********** 2026-04-13 00:26:28.522378 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:28.522390 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:28.522402 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:28.522415 | orchestrator | 2026-04-13 00:26:28.522453 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-13 00:26:28.522475 | orchestrator | Monday 13 April 2026 00:25:53 +0000 (0:00:00.204) 0:00:03.106 ********** 2026-04-13 00:26:28.522495 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:28.522515 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:28.522533 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:28.522549 | orchestrator | 2026-04-13 00:26:28.522562 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-13 00:26:28.522575 | orchestrator | Monday 13 April 2026 00:25:53 +0000 (0:00:00.215) 0:00:03.321 ********** 2026-04-13 00:26:28.522589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:26:28.522603 | orchestrator | 2026-04-13 00:26:28.522665 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-04-13 00:26:28.522679 | orchestrator | Monday 13 April 2026 00:25:53 +0000 (0:00:00.154) 0:00:03.476 ********** 2026-04-13 00:26:28.522692 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:28.522703 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:28.522714 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:28.522724 | orchestrator | 2026-04-13 00:26:28.522758 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-13 00:26:28.522769 | orchestrator | Monday 13 April 2026 00:25:53 +0000 (0:00:00.374) 0:00:03.851 ********** 2026-04-13 00:26:28.522780 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:28.522791 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:28.522802 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:28.522812 | orchestrator | 2026-04-13 00:26:28.522823 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-13 00:26:28.522834 | orchestrator | Monday 13 April 2026 00:25:53 +0000 (0:00:00.131) 0:00:03.982 ********** 2026-04-13 00:26:28.522845 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:28.522856 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:28.522866 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:28.522877 | orchestrator | 2026-04-13 00:26:28.522887 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-13 00:26:28.522898 | orchestrator | Monday 13 April 2026 00:25:54 +0000 (0:00:00.972) 0:00:04.955 ********** 2026-04-13 00:26:28.522909 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:28.522919 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:28.522930 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:28.522940 | orchestrator | 2026-04-13 00:26:28.522951 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-13 
00:26:28.522962 | orchestrator | Monday 13 April 2026 00:25:55 +0000 (0:00:00.414) 0:00:05.369 ********** 2026-04-13 00:26:28.522972 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:28.522983 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:28.522993 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:28.523004 | orchestrator | 2026-04-13 00:26:28.523020 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-13 00:26:28.523031 | orchestrator | Monday 13 April 2026 00:25:56 +0000 (0:00:00.976) 0:00:06.345 ********** 2026-04-13 00:26:28.523042 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:28.523053 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:28.523063 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:28.523074 | orchestrator | 2026-04-13 00:26:28.523085 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-13 00:26:28.523095 | orchestrator | Monday 13 April 2026 00:26:12 +0000 (0:00:15.893) 0:00:22.238 ********** 2026-04-13 00:26:28.523106 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:26:28.523117 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:26:28.523127 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:26:28.523138 | orchestrator | 2026-04-13 00:26:28.523149 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-13 00:26:28.523180 | orchestrator | Monday 13 April 2026 00:26:12 +0000 (0:00:00.080) 0:00:22.319 ********** 2026-04-13 00:26:28.523192 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:26:28.523202 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:26:28.523213 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:26:28.523224 | orchestrator | 2026-04-13 00:26:28.523234 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-13 
00:26:28.523245 | orchestrator | Monday 13 April 2026 00:26:19 +0000 (0:00:07.583) 0:00:29.902 ********** 2026-04-13 00:26:28.523255 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:26:28.523266 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:26:28.523276 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:26:28.523287 | orchestrator | 2026-04-13 00:26:28.523298 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-13 00:26:28.523308 | orchestrator | Monday 13 April 2026 00:26:20 +0000 (0:00:00.464) 0:00:30.367 ********** 2026-04-13 00:26:28.523319 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-13 00:26:28.523330 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-13 00:26:28.523340 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-13 00:26:28.523351 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-13 00:26:28.523369 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-13 00:26:28.523380 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-13 00:26:28.523391 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-13 00:26:28.523401 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-13 00:26:28.523412 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-13 00:26:28.523422 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-13 00:26:28.523459 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-13 00:26:28.523470 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-13 00:26:28.523481 | orchestrator | 2026-04-13 00:26:28.523491 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] *****
2026-04-13 00:26:28.523502 | orchestrator | Monday 13 April 2026 00:26:23 +0000 (0:00:03.404) 0:00:33.771 **********
2026-04-13 00:26:28.523512 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:28.523523 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:28.523534 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:28.523544 | orchestrator |
2026-04-13 00:26:28.523555 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-13 00:26:28.523565 | orchestrator |
2026-04-13 00:26:28.523576 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:26:28.523587 | orchestrator | Monday 13 April 2026 00:26:24 +0000 (0:00:01.210) 0:00:34.982 **********
2026-04-13 00:26:28.523597 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:28.523608 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:28.523618 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:28.523629 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:28.523639 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:28.523650 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:28.523660 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:28.523670 | orchestrator |
2026-04-13 00:26:28.523681 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:26:28.523692 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:28.523704 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:28.523716 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:28.523727 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:26:28.523738 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:26:28.523748 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:26:28.523759 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:26:28.523770 | orchestrator |
2026-04-13 00:26:28.523780 | orchestrator |
2026-04-13 00:26:28.523791 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:26:28.523807 | orchestrator | Monday 13 April 2026 00:26:28 +0000 (0:00:03.576) 0:00:38.558 **********
2026-04-13 00:26:28.523818 | orchestrator | ===============================================================================
2026-04-13 00:26:28.523831 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.89s
2026-04-13 00:26:28.523850 | orchestrator | Install required packages (Debian) -------------------------------------- 7.58s
2026-04-13 00:26:28.523878 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.58s
2026-04-13 00:26:28.523896 | orchestrator | Copy fact files --------------------------------------------------------- 3.40s
2026-04-13 00:26:28.523914 | orchestrator | Create custom facts directory ------------------------------------------- 1.56s
2026-04-13 00:26:28.523931 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.21s
2026-04-13 00:26:28.523959 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s
2026-04-13 00:26:28.731350 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.98s
2026-04-13 00:26:28.731519 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2026-04-13 00:26:28.731546 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-04-13 00:26:28.731565 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.41s
2026-04-13 00:26:28.731582 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.37s
2026-04-13 00:26:28.731599 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-04-13 00:26:28.731618 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-04-13 00:26:28.731635 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-04-13 00:26:28.731655 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-04-13 00:26:28.731674 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-04-13 00:26:28.731692 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-04-13 00:26:28.948207 | orchestrator | + osism apply bootstrap
2026-04-13 00:26:40.331343 | orchestrator | 2026-04-13 00:26:40 | INFO  | Prepare task for execution of bootstrap.
2026-04-13 00:26:40.408893 | orchestrator | 2026-04-13 00:26:40 | INFO  | Task f5b05453-c625-46c8-8a02-f47e3f0f2a17 (bootstrap) was prepared for execution.
2026-04-13 00:26:40.409017 | orchestrator | 2026-04-13 00:26:40 | INFO  | It takes a moment until task f5b05453-c625-46c8-8a02-f47e3f0f2a17 (bootstrap) has been started and output is visible here.
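The PLAY RECAP lines above follow Ansible's fixed `host : key=value` counter format. A minimal, illustrative Python helper for turning one of those lines into structured data (this is an analysis aid, not part of the job itself):

```python
import re

# Parse an Ansible "PLAY RECAP" line, like the ones in the recap above,
# into a hostname plus a dict of integer counters.
def parse_recap_line(line):
    host, _, rest = line.partition(":")
    counters = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, stats = parse_recap_line(
    "testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0"
)
# host is "testbed-node-4"; stats maps each counter name to its value.
```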
2026-04-13 00:26:56.527134 | orchestrator |
2026-04-13 00:26:56.527235 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-13 00:26:56.527248 | orchestrator |
2026-04-13 00:26:56.527258 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-13 00:26:56.527267 | orchestrator | Monday 13 April 2026 00:26:43 +0000 (0:00:00.219) 0:00:00.219 **********
2026-04-13 00:26:56.527274 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:56.527282 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:56.527288 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:56.527295 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:56.527301 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:56.527309 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:56.527316 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:56.527323 | orchestrator |
2026-04-13 00:26:56.527330 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-13 00:26:56.527337 | orchestrator |
2026-04-13 00:26:56.527344 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:26:56.527351 | orchestrator | Monday 13 April 2026 00:26:44 +0000 (0:00:00.370) 0:00:00.589 **********
2026-04-13 00:26:56.527358 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:56.527365 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:56.527371 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:56.527377 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:56.527383 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:56.527387 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:56.527391 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:56.527394 | orchestrator |
2026-04-13 00:26:56.527399 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-13 00:26:56.527452 | orchestrator |
2026-04-13 00:26:56.527458 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:26:56.527464 | orchestrator | Monday 13 April 2026 00:26:49 +0000 (0:00:04.711) 0:00:05.300 **********
2026-04-13 00:26:56.527471 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-13 00:26:56.527478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-13 00:26:56.527484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-13 00:26:56.527490 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-13 00:26:56.527496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:26:56.527502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:26:56.527508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-13 00:26:56.527515 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-13 00:26:56.527521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:26:56.527525 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-13 00:26:56.527529 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-13 00:26:56.527532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-13 00:26:56.527536 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-13 00:26:56.527540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-13 00:26:56.527544 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-13 00:26:56.527548 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-13 00:26:56.527552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-13 00:26:56.527556 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-13 00:26:56.527559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-13 00:26:56.527566 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-13 00:26:56.527572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-13 00:26:56.527577 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:26:56.527583 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:26:56.527589 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-13 00:26:56.527596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-13 00:26:56.527601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-13 00:26:56.527607 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-13 00:26:56.527614 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-13 00:26:56.527648 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-13 00:26:56.527653 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-13 00:26:56.527657 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-13 00:26:56.527660 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-13 00:26:56.527664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-13 00:26:56.527668 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-13 00:26:56.527672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-13 00:26:56.527675 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-13 00:26:56.527679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-13 00:26:56.527684 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:26:56.527688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-13 00:26:56.527692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-13 00:26:56.527697 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-13 00:26:56.527701 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:26:56.527714 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-13 00:26:56.527720 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-13 00:26:56.527726 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:26:56.527733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-13 00:26:56.527739 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-13 00:26:56.527771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:26:56.527779 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-13 00:26:56.527786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:26:56.527793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-13 00:26:56.527799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:26:56.527806 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:26:56.527813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-13 00:26:56.527817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-13 00:26:56.527822 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:26:56.527833 | orchestrator |
2026-04-13 00:26:56.527837 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-13 00:26:56.527841 | orchestrator |
2026-04-13 00:26:56.527845 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-13 00:26:56.527850 | orchestrator | Monday 13 April 2026 00:26:49 +0000 (0:00:00.497) 0:00:05.797 **********
2026-04-13 00:26:56.527854 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:56.527859 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:56.527863 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:56.527867 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:56.527872 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:56.527876 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:56.527881 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:56.527885 | orchestrator |
2026-04-13 00:26:56.527890 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-13 00:26:56.527894 | orchestrator | Monday 13 April 2026 00:26:50 +0000 (0:00:01.173) 0:00:06.971 **********
2026-04-13 00:26:56.527898 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:56.527903 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:26:56.527907 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:26:56.527912 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:26:56.527916 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:26:56.527920 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:26:56.527924 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:26:56.527929 | orchestrator |
2026-04-13 00:26:56.527933 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-13 00:26:56.527937 | orchestrator | Monday 13 April 2026 00:26:52 +0000 (0:00:01.336) 0:00:08.307 **********
2026-04-13 00:26:56.527942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:26:56.527949 | orchestrator |
2026-04-13 00:26:56.527953 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-13 00:26:56.527958 | orchestrator | Monday 13 April 2026 00:26:52 +0000 (0:00:00.310) 0:00:08.617 **********
2026-04-13 00:26:56.527962 | orchestrator | changed: [testbed-manager]
2026-04-13 00:26:56.527966 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:26:56.527971 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:56.527979 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:56.527983 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:56.527989 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:26:56.527995 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:56.528002 | orchestrator |
2026-04-13 00:26:56.528008 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-13 00:26:56.528020 | orchestrator | Monday 13 April 2026 00:26:53 +0000 (0:00:01.523) 0:00:10.141 **********
2026-04-13 00:26:56.528026 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:26:56.528035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:26:56.528044 | orchestrator |
2026-04-13 00:26:56.528051 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-13 00:26:56.528058 | orchestrator | Monday 13 April 2026 00:26:54 +0000 (0:00:00.294) 0:00:10.435 **********
2026-04-13 00:26:56.528064 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:26:56.528071 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:26:56.528077 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:56.528083 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:56.528089 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:56.528096 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:56.528102 | orchestrator |
2026-04-13 00:26:56.528106 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-13 00:26:56.528109 | orchestrator | Monday 13 April 2026 00:26:55 +0000 (0:00:01.042) 0:00:11.478 **********
2026-04-13 00:26:56.528113 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:26:56.528117 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:26:56.528120 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:26:56.528124 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:26:56.528128 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:26:56.528131 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:26:56.528135 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:26:56.528139 | orchestrator |
2026-04-13 00:26:56.528142 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-13 00:26:56.528146 | orchestrator | Monday 13 April 2026 00:26:55 +0000 (0:00:00.737) 0:00:12.216 **********
2026-04-13 00:26:56.528150 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:26:56.528153 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:26:56.528157 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:26:56.528161 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:26:56.528164 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:26:56.528168 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:26:56.528172 | orchestrator | ok: [testbed-manager]
2026-04-13 00:26:56.528175 | orchestrator |
2026-04-13 00:26:56.528179 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-13 00:26:56.528184 | orchestrator | Monday 13 April 2026 00:26:56 +0000 (0:00:00.445) 0:00:12.661 **********
2026-04-13 00:26:56.528188 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:26:56.528191 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:26:56.528199 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:27:07.630185 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:27:07.630291 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:27:07.630306 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:27:07.630318 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:27:07.630329 | orchestrator |
2026-04-13 00:27:07.630341 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-13 00:27:07.630354 | orchestrator | Monday 13 April 2026 00:26:56 +0000 (0:00:00.210) 0:00:12.871 **********
2026-04-13 00:27:07.630367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:07.630442 | orchestrator |
2026-04-13 00:27:07.630455 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-13 00:27:07.630466 | orchestrator | Monday 13 April 2026 00:26:56 +0000 (0:00:00.321) 0:00:13.193 **********
2026-04-13 00:27:07.630478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:07.630512 | orchestrator |
2026-04-13 00:27:07.630524 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-13 00:27:07.630535 | orchestrator | Monday 13 April 2026 00:26:57 +0000 (0:00:00.319) 0:00:13.513 **********
2026-04-13 00:27:07.630546 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.630558 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.630569 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.630579 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.630590 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.630600 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.630611 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.630622 | orchestrator |
2026-04-13 00:27:07.630633 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-13 00:27:07.630644 | orchestrator | Monday 13 April 2026 00:26:58 +0000 (0:00:01.289) 0:00:14.803 **********
2026-04-13 00:27:07.630654 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:27:07.630665 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:27:07.630679 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:27:07.630692 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:27:07.630704 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:27:07.630717 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:27:07.630730 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:27:07.630743 | orchestrator |
2026-04-13 00:27:07.630755 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-13 00:27:07.630769 | orchestrator | Monday 13 April 2026 00:26:58 +0000 (0:00:00.196) 0:00:14.999 **********
2026-04-13 00:27:07.630781 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.630802 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.630815 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.630828 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.630840 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.630853 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.630865 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.630877 | orchestrator |
2026-04-13 00:27:07.630890 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-13 00:27:07.630903 | orchestrator | Monday 13 April 2026 00:26:59 +0000 (0:00:00.534) 0:00:15.533 **********
2026-04-13 00:27:07.630915 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:27:07.630928 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:27:07.630941 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:27:07.630953 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:27:07.630966 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:27:07.630979 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:27:07.630991 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:27:07.631004 | orchestrator |
2026-04-13 00:27:07.631017 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-13 00:27:07.631030 | orchestrator | Monday 13 April 2026 00:26:59 +0000 (0:00:00.205) 0:00:15.739 **********
2026-04-13 00:27:07.631041 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.631052 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:07.631063 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:07.631073 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:07.631084 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:07.631095 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:07.631105 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:07.631116 | orchestrator |
2026-04-13 00:27:07.631127 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-13 00:27:07.631138 | orchestrator | Monday 13 April 2026 00:26:59 +0000 (0:00:00.481) 0:00:16.221 **********
2026-04-13 00:27:07.631149 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.631159 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:07.631178 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:07.631189 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:07.631200 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:07.631210 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:07.631221 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:07.631232 | orchestrator |
2026-04-13 00:27:07.631243 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-13 00:27:07.631254 | orchestrator | Monday 13 April 2026 00:27:01 +0000 (0:00:01.052) 0:00:17.274 **********
2026-04-13 00:27:07.631265 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.631276 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.631286 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.631297 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.631308 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.631319 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.631329 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.631340 | orchestrator |
2026-04-13 00:27:07.631351 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-13 00:27:07.631362 | orchestrator | Monday 13 April 2026 00:27:01 +0000 (0:00:00.956) 0:00:18.230 **********
2026-04-13 00:27:07.631414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:07.631427 | orchestrator |
2026-04-13 00:27:07.631438 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-13 00:27:07.631449 | orchestrator | Monday 13 April 2026 00:27:02 +0000 (0:00:00.316) 0:00:18.547 **********
2026-04-13 00:27:07.631460 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:27:07.631471 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:07.631482 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:07.631492 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:07.631503 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:07.631514 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:07.631524 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:07.631535 | orchestrator |
2026-04-13 00:27:07.631546 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-13 00:27:07.631557 | orchestrator | Monday 13 April 2026 00:27:03 +0000 (0:00:01.212) 0:00:19.760 **********
2026-04-13 00:27:07.631568 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.631578 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.631589 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.631600 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.631632 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.631643 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.631654 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.631666 | orchestrator |
2026-04-13 00:27:07.631676 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-13 00:27:07.631687 | orchestrator | Monday 13 April 2026 00:27:03 +0000 (0:00:00.234) 0:00:19.995 **********
2026-04-13 00:27:07.631698 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.631709 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.631719 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.631730 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.631741 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.631751 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.631762 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.631772 | orchestrator |
2026-04-13 00:27:07.631783 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-13 00:27:07.631794 | orchestrator | Monday 13 April 2026 00:27:03 +0000 (0:00:00.219) 0:00:20.214 **********
2026-04-13 00:27:07.631805 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.631815 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.631826 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.631845 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.631856 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.631866 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.631877 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.631888 | orchestrator |
2026-04-13 00:27:07.631899 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-13 00:27:07.631910 | orchestrator | Monday 13 April 2026 00:27:04 +0000 (0:00:00.216) 0:00:20.431 **********
2026-04-13 00:27:07.631922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:07.631951 | orchestrator |
2026-04-13 00:27:07.631963 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-13 00:27:07.631974 | orchestrator | Monday 13 April 2026 00:27:04 +0000 (0:00:00.272) 0:00:20.703 **********
2026-04-13 00:27:07.631993 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.632004 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.632015 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.632026 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.632036 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.632047 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.632058 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.632069 | orchestrator |
2026-04-13 00:27:07.632079 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-13 00:27:07.632090 | orchestrator | Monday 13 April 2026 00:27:04 +0000 (0:00:00.507) 0:00:21.211 **********
2026-04-13 00:27:07.632101 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:27:07.632112 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:27:07.632123 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:27:07.632134 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:27:07.632145 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:27:07.632156 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:27:07.632167 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:27:07.632178 | orchestrator |
2026-04-13 00:27:07.632188 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-13 00:27:07.632199 | orchestrator | Monday 13 April 2026 00:27:05 +0000 (0:00:00.217) 0:00:21.429 **********
2026-04-13 00:27:07.632210 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:07.632221 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.632232 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.632243 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:07.632253 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.632264 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:07.632275 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.632286 | orchestrator |
2026-04-13 00:27:07.632296 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-13 00:27:07.632307 | orchestrator | Monday 13 April 2026 00:27:06 +0000 (0:00:00.951) 0:00:22.380 **********
2026-04-13 00:27:07.632318 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.632329 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:07.632339 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:07.632350 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:07.632361 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.632371 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.632382 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:07.632412 | orchestrator |
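Each task header in this output carries a `profile_tasks`-style timer: the value in parentheses, e.g. `(0:00:00.507)`, is the previous task's own duration, followed by the cumulative playbook time. An illustrative Python helper for converting that field to seconds (a log-analysis aid, not something the job runs):

```python
import re

# Extract the per-task duration (the "(H:MM:SS.mmm)" field) from a
# profile_tasks timestamp line like the ones above, returned as seconds.
def task_seconds(line):
    match = re.search(r"\((\d+):(\d+):(\d+\.\d+)\)", line)
    hours, minutes, seconds = match.groups()
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

duration = task_seconds(
    "Monday 13 April 2026 00:27:04 +0000 (0:00:00.507) 0:00:21.211 **********"
)
```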
2026-04-13 00:27:07.632424 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-13 00:27:07.632434 | orchestrator | Monday 13 April 2026 00:27:06 +0000 (0:00:00.580) 0:00:22.961 **********
2026-04-13 00:27:07.632445 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:07.632456 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:07.632467 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:07.632477 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:07.632495 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:50.396652 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.396752 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:50.396768 | orchestrator |
2026-04-13 00:27:50.396780 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-13 00:27:50.396792 | orchestrator | Monday 13 April 2026 00:27:07 +0000 (0:00:00.975) 0:00:23.937 **********
2026-04-13 00:27:50.396813 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:50.396824 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:50.396833 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.396843 | orchestrator | changed: [testbed-manager]
2026-04-13 00:27:50.396853 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:50.396863 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:50.396872 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:50.396882 | orchestrator |
2026-04-13 00:27:50.396892 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-13 00:27:50.396909 | orchestrator | Monday 13 April 2026 00:27:24 +0000 (0:00:16.828) 0:00:40.765 **********
2026-04-13 00:27:50.396924 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:50.396934 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:50.396944 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:50.396953 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:50.396973 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:50.396991 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:50.397001 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.397010 | orchestrator |
2026-04-13 00:27:50.397020 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-13 00:27:50.397030 | orchestrator | Monday 13 April 2026 00:27:24 +0000 (0:00:00.270) 0:00:41.005 **********
2026-04-13 00:27:50.397039 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:50.397057 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:50.397068 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:50.397078 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:50.397087 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:50.397097 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:50.397106 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.397116 | orchestrator |
2026-04-13 00:27:50.397126 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-13 00:27:50.397136 | orchestrator | Monday 13 April 2026 00:27:25 +0000 (0:00:00.245) 0:00:41.275 **********
2026-04-13 00:27:50.397145 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:50.397155 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:50.397165 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:50.397174 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:50.397184 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:50.397194 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:50.397205 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.397217 | orchestrator |
2026-04-13 00:27:50.397227 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-13 00:27:50.397239 | orchestrator | Monday 13 April 2026 00:27:25 +0000 (0:00:00.245) 0:00:41.520 **********
2026-04-13 00:27:50.397274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:50.397288 | orchestrator |
2026-04-13 00:27:50.397298 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-13 00:27:50.397307 | orchestrator | Monday 13 April 2026 00:27:25 +0000 (0:00:00.289) 0:00:41.810 **********
2026-04-13 00:27:50.397317 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:50.397327 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:50.397336 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:50.397345 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:50.397372 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:50.397382 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.397392 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:50.397402 | orchestrator |
2026-04-13 00:27:50.397441 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-13 00:27:50.397451 | orchestrator | Monday 13 April 2026 00:27:27 +0000 (0:00:01.937) 0:00:43.748 **********
2026-04-13 00:27:50.397461 | orchestrator | changed: [testbed-manager]
2026-04-13 00:27:50.397471 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:50.397480 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:50.397489 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:50.397499 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:50.397508 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:50.397518 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:50.397527 | orchestrator |
2026-04-13 00:27:50.397537 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-13 00:27:50.397546 | orchestrator | Monday 13 April 2026 00:27:28 +0000 (0:00:01.132) 0:00:44.880 **********
2026-04-13 00:27:50.397556 | orchestrator | ok: [testbed-manager]
2026-04-13 00:27:50.397565 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:27:50.397575 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:27:50.397584 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:27:50.397594 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:27:50.397603 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:27:50.397612 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:27:50.397622 | orchestrator |
2026-04-13 00:27:50.397631 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-13 00:27:50.397641 | orchestrator | Monday 13 April 2026 00:27:29 +0000 (0:00:00.875) 0:00:45.755 **********
2026-04-13 00:27:50.397651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:27:50.397662 | orchestrator |
2026-04-13 00:27:50.397674 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-13 00:27:50.397690 | orchestrator | Monday 13 April 2026 00:27:29 +0000 (0:00:00.350) 0:00:46.106 **********
2026-04-13 00:27:50.397700 | orchestrator | changed: [testbed-manager]
2026-04-13 00:27:50.397710 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:27:50.397719 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:27:50.397729 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:27:50.397738 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:27:50.397748 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:27:50.397757 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:27:50.397767 | orchestrator |
2026-04-13 00:27:50.397791 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-04-13 00:27:50.397801 | orchestrator | Monday 13 April 2026 00:27:30 +0000 (0:00:01.077) 0:00:47.184 ********** 2026-04-13 00:27:50.397811 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:27:50.397820 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:27:50.397830 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:27:50.397839 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:27:50.397848 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:27:50.397858 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:27:50.397867 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:27:50.397876 | orchestrator | 2026-04-13 00:27:50.397886 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-13 00:27:50.397895 | orchestrator | Monday 13 April 2026 00:27:31 +0000 (0:00:00.268) 0:00:47.453 ********** 2026-04-13 00:27:50.397905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:27:50.397915 | orchestrator | 2026-04-13 00:27:50.397925 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-13 00:27:50.397934 | orchestrator | Monday 13 April 2026 00:27:31 +0000 (0:00:00.325) 0:00:47.778 ********** 2026-04-13 00:27:50.397943 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:50.397960 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:50.397970 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:50.397979 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:50.397989 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:50.397998 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:50.398008 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:50.398074 | 
orchestrator | 2026-04-13 00:27:50.398086 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-13 00:27:50.398095 | orchestrator | Monday 13 April 2026 00:27:33 +0000 (0:00:01.723) 0:00:49.501 ********** 2026-04-13 00:27:50.398105 | orchestrator | changed: [testbed-manager] 2026-04-13 00:27:50.398115 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:27:50.398124 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:27:50.398134 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:27:50.398143 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:27:50.398153 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:27:50.398162 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:27:50.398172 | orchestrator | 2026-04-13 00:27:50.398181 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-13 00:27:50.398194 | orchestrator | Monday 13 April 2026 00:27:34 +0000 (0:00:01.228) 0:00:50.730 ********** 2026-04-13 00:27:50.398209 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:27:50.398218 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:27:50.398228 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:27:50.398237 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:27:50.398247 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:27:50.398262 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:27:50.398272 | orchestrator | changed: [testbed-manager] 2026-04-13 00:27:50.398281 | orchestrator | 2026-04-13 00:27:50.398290 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-13 00:27:50.398300 | orchestrator | Monday 13 April 2026 00:27:47 +0000 (0:00:13.199) 0:01:03.929 ********** 2026-04-13 00:27:50.398309 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:50.398318 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:50.398328 | orchestrator | ok: 
[testbed-node-4] 2026-04-13 00:27:50.398337 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:50.398347 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:50.398384 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:50.398395 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:50.398404 | orchestrator | 2026-04-13 00:27:50.398414 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-13 00:27:50.398424 | orchestrator | Monday 13 April 2026 00:27:48 +0000 (0:00:01.002) 0:01:04.931 ********** 2026-04-13 00:27:50.398433 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:50.398442 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:50.398452 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:50.398461 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:50.398470 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:50.398480 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:50.398489 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:50.398498 | orchestrator | 2026-04-13 00:27:50.398508 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-13 00:27:50.398517 | orchestrator | Monday 13 April 2026 00:27:49 +0000 (0:00:00.896) 0:01:05.828 ********** 2026-04-13 00:27:50.398527 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:50.398536 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:50.398550 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:50.398564 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:50.398573 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:50.398583 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:50.398592 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:50.398601 | orchestrator | 2026-04-13 00:27:50.398611 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-13 00:27:50.398620 | orchestrator | Monday 
13 April 2026 00:27:49 +0000 (0:00:00.237) 0:01:06.065 ********** 2026-04-13 00:27:50.398636 | orchestrator | ok: [testbed-manager] 2026-04-13 00:27:50.398646 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:27:50.398655 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:27:50.398666 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:27:50.398683 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:27:50.398692 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:27:50.398702 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:27:50.398711 | orchestrator | 2026-04-13 00:27:50.398721 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-13 00:27:50.398731 | orchestrator | Monday 13 April 2026 00:27:50 +0000 (0:00:00.236) 0:01:06.301 ********** 2026-04-13 00:27:50.398741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:27:50.398751 | orchestrator | 2026-04-13 00:27:50.398767 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-13 00:30:11.913402 | orchestrator | Monday 13 April 2026 00:27:50 +0000 (0:00:00.337) 0:01:06.639 ********** 2026-04-13 00:30:11.913532 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.913557 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.913567 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.913577 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.913587 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:11.913597 | orchestrator | ok: [testbed-manager] 2026-04-13 00:30:11.913614 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.913630 | orchestrator | 2026-04-13 00:30:11.913646 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-04-13 00:30:11.913663 | orchestrator | Monday 13 April 2026 00:27:51 +0000 (0:00:01.509) 0:01:08.149 ********** 2026-04-13 00:30:11.913679 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:30:11.913696 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:30:11.913712 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:30:11.913729 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:30:11.913745 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:30:11.913761 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:11.913776 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:30:11.913793 | orchestrator | 2026-04-13 00:30:11.913809 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-13 00:30:11.913827 | orchestrator | Monday 13 April 2026 00:27:52 +0000 (0:00:00.531) 0:01:08.680 ********** 2026-04-13 00:30:11.913844 | orchestrator | ok: [testbed-manager] 2026-04-13 00:30:11.913862 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.913879 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.913895 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.913910 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.913924 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:11.913939 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.913955 | orchestrator | 2026-04-13 00:30:11.913969 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-13 00:30:11.913985 | orchestrator | Monday 13 April 2026 00:27:52 +0000 (0:00:00.246) 0:01:08.926 ********** 2026-04-13 00:30:11.914000 | orchestrator | ok: [testbed-manager] 2026-04-13 00:30:11.914014 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.914102 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.914118 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.914134 | orchestrator | ok: [testbed-node-4] 
2026-04-13 00:30:11.914151 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.914208 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.914226 | orchestrator | 2026-04-13 00:30:11.914271 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-13 00:30:11.914288 | orchestrator | Monday 13 April 2026 00:27:54 +0000 (0:00:01.342) 0:01:10.269 ********** 2026-04-13 00:30:11.914302 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:11.914323 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:30:11.914372 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:30:11.914388 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:30:11.914403 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:30:11.914418 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:30:11.914433 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:30:11.914449 | orchestrator | 2026-04-13 00:30:11.914465 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-13 00:30:11.914480 | orchestrator | Monday 13 April 2026 00:27:56 +0000 (0:00:02.010) 0:01:12.279 ********** 2026-04-13 00:30:11.914495 | orchestrator | ok: [testbed-manager] 2026-04-13 00:30:11.914510 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.914526 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.914540 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:11.914557 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.914573 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.914589 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.914605 | orchestrator | 2026-04-13 00:30:11.914621 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-13 00:30:11.914636 | orchestrator | Monday 13 April 2026 00:27:58 +0000 (0:00:02.825) 0:01:15.104 ********** 2026-04-13 00:30:11.914652 | orchestrator | ok: 
[testbed-manager] 2026-04-13 00:30:11.914667 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.914683 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.914700 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.914715 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:11.914731 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.914747 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.914762 | orchestrator | 2026-04-13 00:30:11.914777 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-13 00:30:11.914793 | orchestrator | Monday 13 April 2026 00:28:37 +0000 (0:00:38.863) 0:01:53.967 ********** 2026-04-13 00:30:11.914810 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:11.914826 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:30:11.914842 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:30:11.914859 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:30:11.914875 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:30:11.914891 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:30:11.914907 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:30:11.914924 | orchestrator | 2026-04-13 00:30:11.914961 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-13 00:30:11.914973 | orchestrator | Monday 13 April 2026 00:29:56 +0000 (0:01:19.226) 0:03:13.193 ********** 2026-04-13 00:30:11.914983 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.914992 | orchestrator | ok: [testbed-manager] 2026-04-13 00:30:11.915002 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.915012 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.915021 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:11.915031 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.915040 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.915050 | orchestrator | 2026-04-13 00:30:11.915059 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-13 00:30:11.915069 | orchestrator | Monday 13 April 2026 00:29:58 +0000 (0:00:01.673) 0:03:14.867 ********** 2026-04-13 00:30:11.915079 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:30:11.915088 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:30:11.915098 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:30:11.915107 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:30:11.915117 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:30:11.915126 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:30:11.915136 | orchestrator | changed: [testbed-manager] 2026-04-13 00:30:11.915145 | orchestrator | 2026-04-13 00:30:11.915155 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-13 00:30:11.915167 | orchestrator | Monday 13 April 2026 00:30:10 +0000 (0:00:12.235) 0:03:27.103 ********** 2026-04-13 00:30:11.915227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-13 00:30:11.915302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-13 00:30:11.915322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-13 00:30:11.915338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-13 00:30:11.915358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-13 00:30:11.915375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-04-13 00:30:11.915392 | orchestrator | 2026-04-13 00:30:11.915408 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-13 00:30:11.915425 | orchestrator | Monday 13 April 2026 00:30:11 +0000 (0:00:00.393) 0:03:27.496 ********** 2026-04-13 00:30:11.915441 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-04-13 00:30:11.915459 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:30:11.915475 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-13 00:30:11.915491 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:30:11.915501 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-13 00:30:11.915511 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:30:11.915520 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-13 00:30:11.915530 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:30:11.915539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:30:11.915549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:30:11.915558 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:30:11.915568 | orchestrator | 2026-04-13 00:30:11.915581 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-13 00:30:11.915596 | orchestrator | Monday 13 April 2026 00:30:11 +0000 (0:00:00.593) 0:03:28.090 ********** 2026-04-13 00:30:11.915621 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-13 00:30:11.915638 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-13 00:30:11.915653 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-13 00:30:11.915668 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-13 00:30:11.915684 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-04-13 00:30:11.915714 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-13 00:30:17.774456 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-13 00:30:17.774566 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-13 00:30:17.774584 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-13 00:30:17.774597 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-13 00:30:17.774609 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:30:17.774623 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-13 00:30:17.774634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-13 00:30:17.774645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-13 00:30:17.774656 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-13 00:30:17.774667 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-13 00:30:17.774678 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-13 00:30:17.774689 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-13 00:30:17.774699 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-13 00:30:17.774710 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-13 00:30:17.774721 | orchestrator 
| skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-13 00:30:17.774732 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-13 00:30:17.774743 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-13 00:30:17.774753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-13 00:30:17.774764 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-13 00:30:17.774775 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-13 00:30:17.774803 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-13 00:30:17.774814 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-13 00:30:17.774825 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-13 00:30:17.774836 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-13 00:30:17.774847 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-13 00:30:17.774858 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:30:17.774869 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-13 00:30:17.774903 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-13 00:30:17.774915 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:30:17.774927 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-13 00:30:17.774937 | orchestrator | 
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-13 00:30:17.774948 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-13 00:30:17.774959 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-13 00:30:17.774969 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-13 00:30:17.774980 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-13 00:30:17.774991 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-13 00:30:17.775002 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-13 00:30:17.775012 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:30:17.775023 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-13 00:30:17.775034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-13 00:30:17.775044 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-13 00:30:17.775055 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-13 00:30:17.775065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-13 00:30:17.775094 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-13 00:30:17.775106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-13 00:30:17.775117 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})
2026-04-13 00:30:17.775128 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:17.775138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-13 00:30:17.775149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:17.775159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:17.775170 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-13 00:30:17.775180 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:17.775200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:17.775220 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-13 00:30:17.775293 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:17.775315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:17.775335 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-13 00:30:17.775355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:17.775374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:17.775393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-13 00:30:17.775425 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:17.775445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:17.775464 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-13 00:30:17.775483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:17.775501 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:17.775521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-13 00:30:17.775539 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-13 00:30:17.775554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-13 00:30:17.775581 | orchestrator |
2026-04-13 00:30:17.775602 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-13 00:30:17.775620 | orchestrator | Monday 13 April 2026 00:30:16 +0000 (0:00:04.712) 0:03:32.802 **********
2026-04-13 00:30:17.775637 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775654 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775709 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775728 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-13 00:30:17.775763 | orchestrator |
2026-04-13 00:30:17.775782 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-13 00:30:17.775798 | orchestrator | Monday 13 April 2026 00:30:17 +0000 (0:00:00.605) 0:03:33.407 **********
2026-04-13 00:30:17.775809 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:17.775820 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:17.775832 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:17.775842 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:17.775853 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:17.775863 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:17.775874 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:17.775884 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:17.775895 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:17.775906 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:17.775929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365667 | orchestrator |
2026-04-13 00:30:31.365781 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-13 00:30:31.365800 | orchestrator | Monday 13 April 2026 00:30:17 +0000 (0:00:00.648) 0:03:34.056 **********
2026-04-13 00:30:31.365814 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365827 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:31.365865 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365876 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:31.365886 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365897 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:31.365908 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365919 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:31.365931 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-13 00:30:31.365967 | orchestrator |
2026-04-13 00:30:31.365977 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-13 00:30:31.365987 | orchestrator | Monday 13 April 2026 00:30:18 +0000 (0:00:00.517) 0:03:34.573 **********
2026-04-13 00:30:31.365999 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366010 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366112 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:31.366128 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366140 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:31.366152 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366164 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:31.366174 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:31.366190 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366202 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366215 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-13 00:30:31.366246 | orchestrator |
2026-04-13 00:30:31.366259 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-13 00:30:31.366272 | orchestrator | Monday 13 April 2026 00:30:19 +0000 (0:00:00.695) 0:03:35.268 **********
2026-04-13 00:30:31.366284 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:31.366296 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:31.366308 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:31.366319 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:31.366332 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:31.366344 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:31.366356 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:31.366367 | orchestrator |
2026-04-13 00:30:31.366380 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-13 00:30:31.366392 | orchestrator | Monday 13 April 2026 00:30:19 +0000 (0:00:00.331) 0:03:35.600 **********
2026-04-13 00:30:31.366405 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:31.366418 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:31.366431 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:31.366442 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:31.366455 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:31.366466 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:31.366477 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:31.366489 | orchestrator |
2026-04-13 00:30:31.366502 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-13 00:30:31.366514 | orchestrator | Monday 13 April 2026 00:30:25 +0000 (0:00:06.459) 0:03:42.060 **********
2026-04-13 00:30:31.366540 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-13 00:30:31.366555 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-13 00:30:31.366566 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:31.366578 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:31.366589 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-13 00:30:31.366601 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-13 00:30:31.366611 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:31.366622 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:31.366634 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-13 00:30:31.366644 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-13 00:30:31.366655 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:31.366666 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:31.366677 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-13 00:30:31.366688 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:31.366699 | orchestrator |
2026-04-13 00:30:31.366710 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-13 00:30:31.366721 | orchestrator | Monday 13 April 2026 00:30:26 +0000 (0:00:00.315) 0:03:42.375 **********
2026-04-13 00:30:31.366732 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-13 00:30:31.366743 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-13 00:30:31.366754 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-13 00:30:31.366789 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-13 00:30:31.366802 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-13 00:30:31.366813 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-13 00:30:31.366824 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-13 00:30:31.366835 | orchestrator |
2026-04-13 00:30:31.366846 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-13 00:30:31.366857 | orchestrator | Monday 13 April 2026 00:30:27 +0000 (0:00:01.145) 0:03:43.520 **********
2026-04-13 00:30:31.366872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:30:31.366886 | orchestrator |
2026-04-13 00:30:31.366897 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-13 00:30:31.366908 | orchestrator | Monday 13 April 2026 00:30:27 +0000 (0:00:00.401) 0:03:43.922 **********
2026-04-13 00:30:31.366919 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:31.366930 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:31.366942 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:31.366954 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:31.366965 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:31.366976 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:31.366987 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:31.366998 | orchestrator |
2026-04-13 00:30:31.367008 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-13 00:30:31.367019 | orchestrator | Monday 13 April 2026 00:30:28 +0000 (0:00:01.322) 0:03:45.245 **********
2026-04-13 00:30:31.367030 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:31.367041 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:31.367053 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:31.367064 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:31.367074 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:31.367086 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:31.367097 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:31.367108 | orchestrator |
2026-04-13 00:30:31.367119 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-13 00:30:31.367131 | orchestrator | Monday 13 April 2026 00:30:29 +0000 (0:00:00.637) 0:03:45.882 **********
2026-04-13 00:30:31.367143 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:31.367155 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:31.367176 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:31.367187 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:31.367198 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:31.367209 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:31.367221 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:31.367256 | orchestrator |
2026-04-13 00:30:31.367273 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-13 00:30:31.367285 | orchestrator | Monday 13 April 2026 00:30:30 +0000 (0:00:00.626) 0:03:46.509 **********
2026-04-13 00:30:31.367295 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:31.367306 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:31.367317 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:31.367329 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:31.367340 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:31.367352 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:31.367364 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:31.367375 | orchestrator |
2026-04-13 00:30:31.367386 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-13 00:30:31.367398 | orchestrator | Monday 13 April 2026 00:30:30 +0000 (0:00:00.584) 0:03:47.094 **********
2026-04-13 00:30:31.367415 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038713.3243284, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:31.367432 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038696.3562772, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:31.367445 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038714.1525276, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:31.367483 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038713.0423424, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075486 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038715.8333805, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075611 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038718.9279938, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075644 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776038719.5890272, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075657 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075669 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075680 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075691 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075730 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075750 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075763 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-13 00:30:37.075775 | orchestrator |
2026-04-13 00:30:37.075789 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-13 00:30:37.075801 | orchestrator | Monday 13 April 2026 00:30:31 +0000 (0:00:00.978) 0:03:48.073 **********
2026-04-13 00:30:37.075813 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:37.075825 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:37.075836 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:37.075847 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:37.075857 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:37.075868 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:37.075878 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:37.075889 | orchestrator |
2026-04-13 00:30:37.075901 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-13 00:30:37.075912 | orchestrator | Monday 13 April 2026 00:30:33 +0000 (0:00:01.202) 0:03:49.276 **********
2026-04-13 00:30:37.075923 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:37.075933 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:37.075944 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:37.075955 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:37.075965 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:37.075976 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:37.075987 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:37.075997 | orchestrator |
2026-04-13 00:30:37.076008 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-13 00:30:37.076022 | orchestrator | Monday 13 April 2026 00:30:34 +0000 (0:00:01.170) 0:03:50.446 **********
2026-04-13 00:30:37.076034 | orchestrator | changed: [testbed-manager]
2026-04-13 00:30:37.076047 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:30:37.076060 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:30:37.076072 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:30:37.076084 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:30:37.076097 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:30:37.076109 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:30:37.076122 | orchestrator |
2026-04-13 00:30:37.076135 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-13 00:30:37.076148 | orchestrator | Monday 13 April 2026 00:30:35 +0000 (0:00:01.271) 0:03:51.717 **********
2026-04-13 00:30:37.076161 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:30:37.076175 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:30:37.076187 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:30:37.076200 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:30:37.076212 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:30:37.076250 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:30:37.076269 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:30:37.076282 | orchestrator |
2026-04-13 00:30:37.076295 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-13 00:30:37.076308 | orchestrator | Monday 13 April 2026 00:30:35 +0000 (0:00:00.295) 0:03:52.013 **********
2026-04-13 00:30:37.076321 | orchestrator | ok: [testbed-manager]
2026-04-13 00:30:37.076335 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:30:37.076348 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:30:37.076360 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:30:37.076373 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:30:37.076386 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:30:37.076396 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:30:37.076407 | orchestrator |
2026-04-13 00:30:37.076418 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-13 00:30:37.076429 | orchestrator | Monday 13 April 2026 00:30:36 +0000 (0:00:00.825) 0:03:52.838 **********
2026-04-13 00:30:37.076442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:30:37.076455 | orchestrator |
2026-04-13 00:30:37.076466 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-13 00:30:37.076485 | orchestrator | Monday 13 April 2026 00:30:37 +0000 (0:00:00.481) 0:03:53.320 **********
2026-04-13 00:31:53.524362 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524454 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:31:53.524464 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:31:53.524472 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:31:53.524478 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:31:53.524484 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:31:53.524491 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:31:53.524497 | orchestrator |
2026-04-13 00:31:53.524504 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-13 00:31:53.524512 | orchestrator | Monday 13 April 2026 00:30:45 +0000 (0:00:08.040) 0:04:01.360 **********
2026-04-13 00:31:53.524518 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524524 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:31:53.524530 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:31:53.524537 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:31:53.524543 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:31:53.524549 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:31:53.524555 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:31:53.524561 | orchestrator |
2026-04-13 00:31:53.524567 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-13 00:31:53.524573 | orchestrator | Monday 13 April 2026 00:30:46 +0000 (0:00:01.306) 0:04:02.667 **********
2026-04-13 00:31:53.524579 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524586 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:31:53.524592 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:31:53.524598 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:31:53.524603 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:31:53.524610 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:31:53.524616 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:31:53.524623 | orchestrator |
2026-04-13 00:31:53.524629 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-13 00:31:53.524635 | orchestrator | Monday 13 April 2026 00:30:47 +0000 (0:00:01.021) 0:04:03.688 **********
2026-04-13 00:31:53.524641 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524662 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:31:53.524669 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:31:53.524678 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:31:53.524684 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:31:53.524690 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:31:53.524696 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:31:53.524702 | orchestrator |
2026-04-13 00:31:53.524708 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-13 00:31:53.524730 | orchestrator | Monday 13 April 2026 00:30:47 +0000 (0:00:00.294) 0:04:03.983 **********
2026-04-13 00:31:53.524737 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524743 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:31:53.524749 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:31:53.524755 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:31:53.524761 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:31:53.524767 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:31:53.524773 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:31:53.524778 | orchestrator |
2026-04-13 00:31:53.524785 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-13 00:31:53.524791 | orchestrator | Monday 13 April 2026 00:30:48 +0000 (0:00:00.311) 0:04:04.294 **********
2026-04-13 00:31:53.524797 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524803 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:31:53.524809 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:31:53.524815 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:31:53.524821 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:31:53.524827 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:31:53.524833 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:31:53.524839 | orchestrator |
2026-04-13 00:31:53.524845 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-13 00:31:53.524851 | orchestrator | Monday 13 April 2026 00:30:48 +0000 (0:00:00.294) 0:04:04.589 **********
2026-04-13 00:31:53.524857 | orchestrator | ok: [testbed-manager]
2026-04-13 00:31:53.524863 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:31:53.524869 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:31:53.524875 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:31:53.524881 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:31:53.524887 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:31:53.524893 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:31:53.524899 | orchestrator |
2026-04-13 00:31:53.524905 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-13 00:31:53.524911 | orchestrator | Monday 13 April 2026 00:30:54 +0000 (0:00:05.706) 0:04:10.295 **********
2026-04-13 00:31:53.524919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:31:53.524929 | orchestrator |
2026-04-13 00:31:53.524936 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-13 00:31:53.524943 | orchestrator | Monday 13 April 2026 00:30:54 +0000 (0:00:00.415) 0:04:10.711 **********
2026-04-13 00:31:53.524950 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.524957 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-13 00:31:53.524965 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.524972 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-13 00:31:53.524979 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:31:53.524987 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.524994 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-13 00:31:53.525001 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:31:53.525008 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:31:53.525015 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.525023 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-13 00:31:53.525029 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:31:53.525037 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.525044 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-13 00:31:53.525051 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.525058 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:31:53.525082 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-13 00:31:53.525090 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:31:53.525098 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-13 00:31:53.525105 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-13 00:31:53.525112 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:31:53.525119 | orchestrator |
2026-04-13 00:31:53.525126 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-13 00:31:53.525133 | orchestrator | Monday 13 April 2026 00:30:54 +0000 (0:00:00.379) 0:04:11.091 **********
2026-04-13 00:31:53.525140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:31:53.525148 | orchestrator |
2026-04-13 00:31:53.525155 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-13 00:31:53.525162 | orchestrator | Monday 13 April 2026 00:30:55 +0000 (0:00:00.502) 0:04:11.594 **********
2026-04-13 00:31:53.525186 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-13 00:31:53.525193 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-13 00:31:53.525200 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:31:53.525207 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:31:53.525214 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-13 00:31:53.525221 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-13 00:31:53.525228 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:31:53.525235 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:31:53.525245 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-13 00:31:53.525253 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-13 00:31:53.525260 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:31:53.525267 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:31:53.525274 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-13 00:31:53.525281 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:31:53.525288 | orchestrator |
2026-04-13 00:31:53.525295 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-13 00:31:53.525303 | orchestrator | Monday 13 April 2026 00:30:55 +0000 (0:00:00.318) 0:04:11.913 **********
2026-04-13 00:31:53.525310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:31:53.525316 | orchestrator |
2026-04-13 00:31:53.525322 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-13 00:31:53.525329 | orchestrator | Monday 13 April 2026 00:30:56 +0000 (0:00:00.409) 0:04:12.322 **********
2026-04-13 00:31:53.525335 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:31:53.525341 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:31:53.525347 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:31:53.525353 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:31:53.525359 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:31:53.525374 | orchestrator | changed: [testbed-manager]
2026-04-13 00:31:53.525380 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:31:53.525386 | orchestrator |
2026-04-13 00:31:53.525392 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-13 00:31:53.525398 | orchestrator | Monday 13 April 2026 00:31:30 +0000 (0:00:33.986) 0:04:46.309 **********
2026-04-13 00:31:53.525404 | orchestrator | changed: [testbed-manager]
2026-04-13 00:31:53.525410 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:31:53.525416 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:31:53.525428 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:31:53.525434 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:31:53.525440 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:31:53.525445 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:31:53.525451 | orchestrator |
2026-04-13 00:31:53.525458 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-13 00:31:53.525464 | orchestrator | Monday 13 April 2026 00:31:38 +0000 (0:00:08.661) 0:04:54.971 **********
2026-04-13 00:31:53.525470 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:31:53.525476 | orchestrator | changed: [testbed-manager]
2026-04-13 00:31:53.525482 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:31:53.525488 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:31:53.525494 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:31:53.525500 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:31:53.525505 | orchestrator | changed:
[testbed-node-5] 2026-04-13 00:31:53.525511 | orchestrator | 2026-04-13 00:31:53.525518 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-13 00:31:53.525524 | orchestrator | Monday 13 April 2026 00:31:46 +0000 (0:00:07.544) 0:05:02.515 ********** 2026-04-13 00:31:53.525530 | orchestrator | ok: [testbed-manager] 2026-04-13 00:31:53.525536 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:31:53.525542 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:31:53.525548 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:31:53.525554 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:31:53.525560 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:31:53.525566 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:31:53.525572 | orchestrator | 2026-04-13 00:31:53.525578 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-13 00:31:53.525584 | orchestrator | Monday 13 April 2026 00:31:47 +0000 (0:00:01.699) 0:05:04.215 ********** 2026-04-13 00:31:53.525590 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:31:53.525596 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:31:53.525602 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:31:53.525608 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:31:53.525614 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:31:53.525620 | orchestrator | changed: [testbed-manager] 2026-04-13 00:31:53.525626 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:31:53.525632 | orchestrator | 2026-04-13 00:31:53.525642 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-13 00:32:04.824862 | orchestrator | Monday 13 April 2026 00:31:53 +0000 (0:00:05.551) 0:05:09.767 ********** 2026-04-13 00:32:04.824997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:32:04.825023 | orchestrator | 2026-04-13 00:32:04.825043 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-13 00:32:04.825062 | orchestrator | Monday 13 April 2026 00:31:53 +0000 (0:00:00.443) 0:05:10.210 ********** 2026-04-13 00:32:04.825081 | orchestrator | changed: [testbed-manager] 2026-04-13 00:32:04.825101 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:32:04.825119 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:32:04.825138 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:32:04.825154 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:32:04.825218 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:32:04.825238 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:32:04.825255 | orchestrator | 2026-04-13 00:32:04.825274 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-13 00:32:04.825293 | orchestrator | Monday 13 April 2026 00:31:54 +0000 (0:00:00.792) 0:05:11.002 ********** 2026-04-13 00:32:04.825311 | orchestrator | ok: [testbed-manager] 2026-04-13 00:32:04.825331 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:32:04.825349 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:32:04.825367 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:32:04.825438 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:32:04.825487 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:32:04.825507 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:32:04.825526 | orchestrator | 2026-04-13 00:32:04.825545 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-13 00:32:04.825584 | orchestrator | Monday 13 April 2026 00:31:56 +0000 (0:00:01.722) 0:05:12.725 ********** 2026-04-13 00:32:04.825604 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:32:04.825624 | 
orchestrator | changed: [testbed-node-0] 2026-04-13 00:32:04.825643 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:32:04.825662 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:32:04.825681 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:32:04.825701 | orchestrator | changed: [testbed-manager] 2026-04-13 00:32:04.825719 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:32:04.825739 | orchestrator | 2026-04-13 00:32:04.825758 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-13 00:32:04.825776 | orchestrator | Monday 13 April 2026 00:31:57 +0000 (0:00:00.844) 0:05:13.569 ********** 2026-04-13 00:32:04.825793 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:32:04.825812 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:32:04.825830 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:32:04.825848 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:32:04.825867 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:32:04.825886 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:32:04.825904 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:32:04.825923 | orchestrator | 2026-04-13 00:32:04.825944 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-13 00:32:04.825962 | orchestrator | Monday 13 April 2026 00:31:57 +0000 (0:00:00.288) 0:05:13.858 ********** 2026-04-13 00:32:04.825978 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:32:04.825995 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:32:04.826011 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:32:04.826094 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:32:04.826113 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:32:04.826132 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:32:04.826150 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:32:04.826206 | 
orchestrator | 2026-04-13 00:32:04.826225 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-13 00:32:04.826243 | orchestrator | Monday 13 April 2026 00:31:57 +0000 (0:00:00.380) 0:05:14.238 ********** 2026-04-13 00:32:04.826261 | orchestrator | ok: [testbed-manager] 2026-04-13 00:32:04.826279 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:32:04.826298 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:32:04.826316 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:32:04.826333 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:32:04.826351 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:32:04.826368 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:32:04.826386 | orchestrator | 2026-04-13 00:32:04.826403 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-13 00:32:04.826421 | orchestrator | Monday 13 April 2026 00:31:58 +0000 (0:00:00.424) 0:05:14.662 ********** 2026-04-13 00:32:04.826502 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:32:04.826521 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:32:04.826538 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:32:04.826555 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:32:04.826573 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:32:04.826590 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:32:04.826608 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:32:04.826626 | orchestrator | 2026-04-13 00:32:04.826643 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-13 00:32:04.826662 | orchestrator | Monday 13 April 2026 00:31:58 +0000 (0:00:00.269) 0:05:14.932 ********** 2026-04-13 00:32:04.826680 | orchestrator | ok: [testbed-manager] 2026-04-13 00:32:04.826699 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:32:04.826736 | orchestrator | ok: [testbed-node-1] 2026-04-13 
00:32:04.826754 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:32:04.826770 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:32:04.826787 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:32:04.826805 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:32:04.826822 | orchestrator | 2026-04-13 00:32:04.826839 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-13 00:32:04.826855 | orchestrator | Monday 13 April 2026 00:31:58 +0000 (0:00:00.296) 0:05:15.228 ********** 2026-04-13 00:32:04.826872 | orchestrator | ok: [testbed-manager] =>  2026-04-13 00:32:04.826889 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.826905 | orchestrator | ok: [testbed-node-0] =>  2026-04-13 00:32:04.826921 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.826938 | orchestrator | ok: [testbed-node-1] =>  2026-04-13 00:32:04.826954 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.826971 | orchestrator | ok: [testbed-node-2] =>  2026-04-13 00:32:04.826987 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.827034 | orchestrator | ok: [testbed-node-3] =>  2026-04-13 00:32:04.827053 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.827072 | orchestrator | ok: [testbed-node-4] =>  2026-04-13 00:32:04.827089 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.827106 | orchestrator | ok: [testbed-node-5] =>  2026-04-13 00:32:04.827123 | orchestrator |  docker_version: 5:27.5.1 2026-04-13 00:32:04.827141 | orchestrator | 2026-04-13 00:32:04.827188 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-13 00:32:04.827208 | orchestrator | Monday 13 April 2026 00:31:59 +0000 (0:00:00.315) 0:05:15.544 ********** 2026-04-13 00:32:04.827227 | orchestrator | ok: [testbed-manager] =>  2026-04-13 00:32:04.827246 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827263 | orchestrator | ok: 
[testbed-node-0] =>  2026-04-13 00:32:04.827281 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827298 | orchestrator | ok: [testbed-node-1] =>  2026-04-13 00:32:04.827316 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827334 | orchestrator | ok: [testbed-node-2] =>  2026-04-13 00:32:04.827351 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827369 | orchestrator | ok: [testbed-node-3] =>  2026-04-13 00:32:04.827386 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827403 | orchestrator | ok: [testbed-node-4] =>  2026-04-13 00:32:04.827419 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827436 | orchestrator | ok: [testbed-node-5] =>  2026-04-13 00:32:04.827454 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-13 00:32:04.827471 | orchestrator | 2026-04-13 00:32:04.827489 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-13 00:32:04.827506 | orchestrator | Monday 13 April 2026 00:31:59 +0000 (0:00:00.283) 0:05:15.828 ********** 2026-04-13 00:32:04.827523 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:32:04.827540 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:32:04.827557 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:32:04.827573 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:32:04.827606 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:32:04.827624 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:32:04.827641 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:32:04.827659 | orchestrator | 2026-04-13 00:32:04.827677 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-13 00:32:04.827694 | orchestrator | Monday 13 April 2026 00:31:59 +0000 (0:00:00.300) 0:05:16.128 ********** 2026-04-13 00:32:04.827711 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:32:04.827728 | orchestrator | 
skipping: [testbed-node-0] 2026-04-13 00:32:04.827746 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:32:04.827763 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:32:04.827780 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:32:04.827799 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:32:04.827817 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:32:04.827850 | orchestrator | 2026-04-13 00:32:04.827869 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-13 00:32:04.827886 | orchestrator | Monday 13 April 2026 00:32:00 +0000 (0:00:00.286) 0:05:16.414 ********** 2026-04-13 00:32:04.827906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:32:04.827927 | orchestrator | 2026-04-13 00:32:04.827944 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-13 00:32:04.827962 | orchestrator | Monday 13 April 2026 00:32:00 +0000 (0:00:00.431) 0:05:16.846 ********** 2026-04-13 00:32:04.827980 | orchestrator | ok: [testbed-manager] 2026-04-13 00:32:04.827998 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:32:04.828015 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:32:04.828033 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:32:04.828051 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:32:04.828068 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:32:04.828087 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:32:04.828105 | orchestrator | 2026-04-13 00:32:04.828123 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-13 00:32:04.828140 | orchestrator | Monday 13 April 2026 00:32:01 +0000 (0:00:00.814) 0:05:17.660 ********** 2026-04-13 
00:32:04.828157 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:32:04.828254 | orchestrator | ok: [testbed-manager] 2026-04-13 00:32:04.828270 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:32:04.828285 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:32:04.828302 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:32:04.828319 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:32:04.828336 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:32:04.828350 | orchestrator | 2026-04-13 00:32:04.828367 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-13 00:32:04.828381 | orchestrator | Monday 13 April 2026 00:32:04 +0000 (0:00:03.025) 0:05:20.686 ********** 2026-04-13 00:32:04.828392 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-13 00:32:04.828402 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-13 00:32:04.828411 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-13 00:32:04.828421 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-13 00:32:04.828431 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-13 00:32:04.828440 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-13 00:32:04.828450 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:32:04.828459 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-13 00:32:04.828468 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-13 00:32:04.828478 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-13 00:32:04.828487 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:32:04.828496 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-13 00:32:04.828506 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-13 00:32:04.828515 | orchestrator | skipping: [testbed-node-2] => 
(item=docker-engine)  2026-04-13 00:32:04.828524 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:32:04.828534 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-13 00:32:04.828562 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-13 00:33:06.398303 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-13 00:33:06.398408 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:06.398422 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-13 00:33:06.398432 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-13 00:33:06.398442 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-13 00:33:06.398452 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:06.398485 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:06.398496 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-13 00:33:06.398505 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-13 00:33:06.398515 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-13 00:33:06.398524 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:06.398535 | orchestrator | 2026-04-13 00:33:06.398546 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-13 00:33:06.398558 | orchestrator | Monday 13 April 2026 00:32:05 +0000 (0:00:00.644) 0:05:21.330 ********** 2026-04-13 00:33:06.398567 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.398577 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.398587 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.398596 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.398605 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.398615 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.398624 | orchestrator | changed: [testbed-node-5] 
2026-04-13 00:33:06.398633 | orchestrator | 2026-04-13 00:33:06.398643 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-13 00:33:06.398652 | orchestrator | Monday 13 April 2026 00:32:11 +0000 (0:00:06.471) 0:05:27.802 ********** 2026-04-13 00:33:06.398662 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.398671 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.398681 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.398691 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.398700 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.398709 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.398719 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.398728 | orchestrator | 2026-04-13 00:33:06.398738 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-13 00:33:06.398747 | orchestrator | Monday 13 April 2026 00:32:12 +0000 (0:00:01.111) 0:05:28.913 ********** 2026-04-13 00:33:06.398757 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.398766 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.398775 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.398785 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.398794 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.398803 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.398813 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.398822 | orchestrator | 2026-04-13 00:33:06.398832 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-13 00:33:06.398843 | orchestrator | Monday 13 April 2026 00:32:21 +0000 (0:00:08.893) 0:05:37.806 ********** 2026-04-13 00:33:06.398854 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:06.398864 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.398875 | 
orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.398886 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.398897 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.398908 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.398919 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.398930 | orchestrator | 2026-04-13 00:33:06.398941 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-13 00:33:06.398953 | orchestrator | Monday 13 April 2026 00:32:24 +0000 (0:00:03.443) 0:05:41.250 ********** 2026-04-13 00:33:06.398963 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.398974 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.398985 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.398996 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.399007 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.399018 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.399030 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.399041 | orchestrator | 2026-04-13 00:33:06.399052 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-13 00:33:06.399071 | orchestrator | Monday 13 April 2026 00:32:26 +0000 (0:00:01.403) 0:05:42.654 ********** 2026-04-13 00:33:06.399082 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.399094 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.399106 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.399146 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.399157 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.399169 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.399180 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.399190 | orchestrator | 2026-04-13 00:33:06.399201 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-04-13 00:33:06.399213 | orchestrator | Monday 13 April 2026 00:32:27 +0000 (0:00:01.408) 0:05:44.062 ********** 2026-04-13 00:33:06.399224 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:06.399233 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:06.399243 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:06.399252 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:06.399261 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:06.399271 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:06.399280 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:06.399289 | orchestrator | 2026-04-13 00:33:06.399341 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-13 00:33:06.399352 | orchestrator | Monday 13 April 2026 00:32:28 +0000 (0:00:00.649) 0:05:44.711 ********** 2026-04-13 00:33:06.399362 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.399372 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.399381 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.399390 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.399400 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.399409 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.399419 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.399428 | orchestrator | 2026-04-13 00:33:06.399438 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-13 00:33:06.399465 | orchestrator | Monday 13 April 2026 00:32:38 +0000 (0:00:09.680) 0:05:54.392 ********** 2026-04-13 00:33:06.399476 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:06.399486 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.399495 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.399505 | orchestrator | changed: [testbed-node-2] 
2026-04-13 00:33:06.399514 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.399524 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.399533 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.399543 | orchestrator | 2026-04-13 00:33:06.399552 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-13 00:33:06.399562 | orchestrator | Monday 13 April 2026 00:32:39 +0000 (0:00:01.154) 0:05:55.547 ********** 2026-04-13 00:33:06.399571 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.399581 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.399590 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.399599 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.399609 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.399618 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.399628 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.399637 | orchestrator | 2026-04-13 00:33:06.399646 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-13 00:33:06.399656 | orchestrator | Monday 13 April 2026 00:32:48 +0000 (0:00:09.326) 0:06:04.873 ********** 2026-04-13 00:33:06.399666 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.399675 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.399684 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.399693 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.399703 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.399712 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.399729 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.399738 | orchestrator | 2026-04-13 00:33:06.399748 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-13 00:33:06.399762 | orchestrator | Monday 13 April 2026 00:32:59 
+0000 (0:00:11.105) 0:06:15.979 ********** 2026-04-13 00:33:06.399772 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-13 00:33:06.399782 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-13 00:33:06.399791 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-13 00:33:06.399801 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-13 00:33:06.399811 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-13 00:33:06.399820 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-13 00:33:06.399829 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-13 00:33:06.399839 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-13 00:33:06.399848 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-13 00:33:06.399858 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-13 00:33:06.399867 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-13 00:33:06.399877 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-13 00:33:06.399886 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-13 00:33:06.399895 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-13 00:33:06.399905 | orchestrator | 2026-04-13 00:33:06.399915 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-04-13 00:33:06.399924 | orchestrator | Monday 13 April 2026 00:33:00 +0000 (0:00:01.229) 0:06:17.208 ********** 2026-04-13 00:33:06.399934 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:06.399943 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:06.399953 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:06.399962 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:06.399971 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:06.399981 | orchestrator | skipping: 
[testbed-node-4] 2026-04-13 00:33:06.399990 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:06.400000 | orchestrator | 2026-04-13 00:33:06.400016 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-13 00:33:06.400032 | orchestrator | Monday 13 April 2026 00:33:01 +0000 (0:00:00.687) 0:06:17.895 ********** 2026-04-13 00:33:06.400048 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:06.400063 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:06.400078 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:06.400095 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:06.400111 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:06.400145 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:06.400155 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:06.400164 | orchestrator | 2026-04-13 00:33:06.400174 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-13 00:33:06.400184 | orchestrator | Monday 13 April 2026 00:33:05 +0000 (0:00:03.925) 0:06:21.821 ********** 2026-04-13 00:33:06.400194 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:06.400203 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:06.400212 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:06.400222 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:06.400231 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:06.400241 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:06.400252 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:06.400262 | orchestrator | 2026-04-13 00:33:06.400273 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-13 00:33:06.400285 | orchestrator | Monday 13 April 2026 00:33:06 +0000 (0:00:00.546) 0:06:22.367 ********** 2026-04-13 
00:33:06.400295 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-13 00:33:06.400307 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-13 00:33:06.400325 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:06.400336 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-13 00:33:06.400347 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-13 00:33:06.400358 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:06.400368 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-13 00:33:06.400379 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-13 00:33:06.400390 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:06.400408 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-13 00:33:26.236195 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-13 00:33:26.236303 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:26.236320 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-13 00:33:26.236333 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-13 00:33:26.236344 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:26.236354 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-13 00:33:26.236365 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-13 00:33:26.236376 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:26.236387 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-13 00:33:26.236398 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-13 00:33:26.236409 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:26.236420 | orchestrator | 2026-04-13 00:33:26.236432 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-04-13 00:33:26.236444 | orchestrator | Monday 13 April 2026 00:33:06 +0000 (0:00:00.592) 0:06:22.959 ********** 2026-04-13 00:33:26.236455 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:26.236466 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:26.236491 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:26.236503 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:26.236523 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:26.236534 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:26.236545 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:26.236556 | orchestrator | 2026-04-13 00:33:26.236567 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-13 00:33:26.236579 | orchestrator | Monday 13 April 2026 00:33:07 +0000 (0:00:00.517) 0:06:23.477 ********** 2026-04-13 00:33:26.236590 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:26.236601 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:26.236612 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:26.236623 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:26.236633 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:26.236644 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:26.236655 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:26.236666 | orchestrator | 2026-04-13 00:33:26.236677 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-04-13 00:33:26.236688 | orchestrator | Monday 13 April 2026 00:33:07 +0000 (0:00:00.715) 0:06:24.193 ********** 2026-04-13 00:33:26.236701 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:26.236713 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:26.236725 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:26.236737 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 00:33:26.236749 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:26.236761 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:26.236773 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:26.236785 | orchestrator | 2026-04-13 00:33:26.236797 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-13 00:33:26.236810 | orchestrator | Monday 13 April 2026 00:33:08 +0000 (0:00:00.529) 0:06:24.722 ********** 2026-04-13 00:33:26.236823 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.236863 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:26.236877 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:26.236889 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:26.236902 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:26.236914 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:26.236928 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:26.236940 | orchestrator | 2026-04-13 00:33:26.236953 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-13 00:33:26.236966 | orchestrator | Monday 13 April 2026 00:33:10 +0000 (0:00:01.790) 0:06:26.513 ********** 2026-04-13 00:33:26.236979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:26.236994 | orchestrator | 2026-04-13 00:33:26.237007 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-13 00:33:26.237019 | orchestrator | Monday 13 April 2026 00:33:11 +0000 (0:00:00.995) 0:06:27.508 ********** 2026-04-13 00:33:26.237031 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.237044 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:26.237056 | orchestrator | changed: 
[testbed-node-1] 2026-04-13 00:33:26.237066 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:26.237077 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:26.237087 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:26.237119 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:26.237131 | orchestrator | 2026-04-13 00:33:26.237142 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-13 00:33:26.237153 | orchestrator | Monday 13 April 2026 00:33:12 +0000 (0:00:01.293) 0:06:28.801 ********** 2026-04-13 00:33:26.237163 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.237174 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:26.237184 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:26.237195 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:26.237205 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:26.237216 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:26.237226 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:26.237237 | orchestrator | 2026-04-13 00:33:26.237248 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-13 00:33:26.237258 | orchestrator | Monday 13 April 2026 00:33:13 +0000 (0:00:00.928) 0:06:29.730 ********** 2026-04-13 00:33:26.237269 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.237280 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:26.237290 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:26.237301 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:26.237312 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:26.237323 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:26.237333 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:26.237344 | orchestrator | 2026-04-13 00:33:26.237355 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-04-13 00:33:26.237383 | orchestrator | Monday 13 April 2026 00:33:14 +0000 (0:00:01.360) 0:06:31.090 ********** 2026-04-13 00:33:26.237395 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:26.237406 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:26.237417 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:26.237427 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:26.237438 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:26.237448 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:26.237459 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:26.237469 | orchestrator | 2026-04-13 00:33:26.237480 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-13 00:33:26.237491 | orchestrator | Monday 13 April 2026 00:33:16 +0000 (0:00:01.397) 0:06:32.488 ********** 2026-04-13 00:33:26.237501 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.237512 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:26.237532 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:26.237543 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:26.237553 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:26.237564 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:26.237575 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:26.237585 | orchestrator | 2026-04-13 00:33:26.237596 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-13 00:33:26.237607 | orchestrator | Monday 13 April 2026 00:33:17 +0000 (0:00:01.529) 0:06:34.017 ********** 2026-04-13 00:33:26.237617 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:26.237628 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:26.237638 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:26.237649 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:26.237659 | orchestrator | changed: 
[testbed-node-4] 2026-04-13 00:33:26.237670 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:26.237680 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:26.237691 | orchestrator | 2026-04-13 00:33:26.237702 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-13 00:33:26.237727 | orchestrator | Monday 13 April 2026 00:33:19 +0000 (0:00:01.436) 0:06:35.454 ********** 2026-04-13 00:33:26.237738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:26.237749 | orchestrator | 2026-04-13 00:33:26.237760 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-13 00:33:26.237771 | orchestrator | Monday 13 April 2026 00:33:20 +0000 (0:00:00.892) 0:06:36.347 ********** 2026-04-13 00:33:26.237781 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.237792 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:26.237803 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:26.237813 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:26.237824 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:26.237835 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:26.237845 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:26.237856 | orchestrator | 2026-04-13 00:33:26.237867 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-13 00:33:26.237878 | orchestrator | Monday 13 April 2026 00:33:21 +0000 (0:00:01.328) 0:06:37.676 ********** 2026-04-13 00:33:26.237888 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.237899 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:26.237910 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:26.237920 | orchestrator | ok: [testbed-node-2] 
2026-04-13 00:33:26.237931 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:26.237941 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:26.237952 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:26.237963 | orchestrator | 2026-04-13 00:33:26.237973 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-13 00:33:26.237984 | orchestrator | Monday 13 April 2026 00:33:22 +0000 (0:00:01.306) 0:06:38.983 ********** 2026-04-13 00:33:26.237995 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.238006 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:26.238078 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:26.238093 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:26.238119 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:26.238130 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:26.238141 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:26.238151 | orchestrator | 2026-04-13 00:33:26.238162 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-13 00:33:26.238173 | orchestrator | Monday 13 April 2026 00:33:23 +0000 (0:00:01.121) 0:06:40.104 ********** 2026-04-13 00:33:26.238184 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:26.238194 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:26.238205 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:26.238216 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:26.238234 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:26.238245 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:26.238256 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:26.238266 | orchestrator | 2026-04-13 00:33:26.238277 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-13 00:33:26.238288 | orchestrator | Monday 13 April 2026 00:33:24 +0000 (0:00:01.144) 0:06:41.249 ********** 2026-04-13 00:33:26.238299 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:26.238309 | orchestrator | 2026-04-13 00:33:26.238320 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:26.238331 | orchestrator | Monday 13 April 2026 00:33:25 +0000 (0:00:00.929) 0:06:42.179 ********** 2026-04-13 00:33:26.238342 | orchestrator | 2026-04-13 00:33:26.238353 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:26.238364 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.205) 0:06:42.384 ********** 2026-04-13 00:33:26.238374 | orchestrator | 2026-04-13 00:33:26.238385 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:26.238396 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.049) 0:06:42.434 ********** 2026-04-13 00:33:26.238406 | orchestrator | 2026-04-13 00:33:26.238417 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:26.238435 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.044) 0:06:42.478 ********** 2026-04-13 00:33:52.658355 | orchestrator | 2026-04-13 00:33:52.658455 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:52.658469 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.053) 0:06:42.531 ********** 2026-04-13 00:33:52.658479 | orchestrator | 2026-04-13 00:33:52.658490 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:52.658499 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.043) 0:06:42.575 ********** 2026-04-13 00:33:52.658509 | orchestrator | 
2026-04-13 00:33:52.658519 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-13 00:33:52.658529 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.042) 0:06:42.618 ********** 2026-04-13 00:33:52.658538 | orchestrator | 2026-04-13 00:33:52.658548 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-13 00:33:52.658558 | orchestrator | Monday 13 April 2026 00:33:26 +0000 (0:00:00.049) 0:06:42.667 ********** 2026-04-13 00:33:52.658567 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:52.658578 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:52.658588 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:52.658597 | orchestrator | 2026-04-13 00:33:52.658607 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-13 00:33:52.658617 | orchestrator | Monday 13 April 2026 00:33:27 +0000 (0:00:01.249) 0:06:43.916 ********** 2026-04-13 00:33:52.658645 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:52.658667 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:52.658677 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:52.658687 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:52.658697 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:52.658706 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:52.658716 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:52.658725 | orchestrator | 2026-04-13 00:33:52.658750 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-13 00:33:52.658760 | orchestrator | Monday 13 April 2026 00:33:29 +0000 (0:00:01.389) 0:06:45.306 ********** 2026-04-13 00:33:52.658769 | orchestrator | changed: [testbed-manager] 2026-04-13 00:33:52.658779 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:52.658788 | orchestrator | changed: [testbed-node-1] 
2026-04-13 00:33:52.658798 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:52.658828 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:52.658839 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:52.658848 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:52.658858 | orchestrator | 2026-04-13 00:33:52.658867 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-13 00:33:52.658877 | orchestrator | Monday 13 April 2026 00:33:30 +0000 (0:00:01.298) 0:06:46.604 ********** 2026-04-13 00:33:52.658886 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:52.658896 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:52.658906 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:52.658918 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:52.658928 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:52.658940 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:52.658951 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:52.658962 | orchestrator | 2026-04-13 00:33:52.658974 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-13 00:33:52.658985 | orchestrator | Monday 13 April 2026 00:33:32 +0000 (0:00:02.532) 0:06:49.137 ********** 2026-04-13 00:33:52.658996 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:52.659007 | orchestrator | 2026-04-13 00:33:52.659018 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-13 00:33:52.659030 | orchestrator | Monday 13 April 2026 00:33:32 +0000 (0:00:00.113) 0:06:49.250 ********** 2026-04-13 00:33:52.659041 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:52.659052 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:52.659063 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:52.659098 | orchestrator | changed: [testbed-node-2] 2026-04-13 
00:33:52.659110 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:52.659119 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:33:52.659129 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:52.659138 | orchestrator | 2026-04-13 00:33:52.659148 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-13 00:33:52.659158 | orchestrator | Monday 13 April 2026 00:33:34 +0000 (0:00:01.241) 0:06:50.491 ********** 2026-04-13 00:33:52.659168 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:52.659177 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:52.659186 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:52.659196 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:52.659205 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:52.659214 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:52.659224 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:52.659233 | orchestrator | 2026-04-13 00:33:52.659243 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-13 00:33:52.659252 | orchestrator | Monday 13 April 2026 00:33:34 +0000 (0:00:00.601) 0:06:51.093 ********** 2026-04-13 00:33:52.659262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:52.659274 | orchestrator | 2026-04-13 00:33:52.659283 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-13 00:33:52.659293 | orchestrator | Monday 13 April 2026 00:33:35 +0000 (0:00:00.970) 0:06:52.064 ********** 2026-04-13 00:33:52.659303 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:52.659312 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:52.659322 | orchestrator 
| ok: [testbed-node-0] 2026-04-13 00:33:52.659331 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:52.659340 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:52.659350 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:52.659359 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:52.659368 | orchestrator | 2026-04-13 00:33:52.659378 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-13 00:33:52.659387 | orchestrator | Monday 13 April 2026 00:33:36 +0000 (0:00:01.122) 0:06:53.187 ********** 2026-04-13 00:33:52.659405 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-13 00:33:52.659431 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-13 00:33:52.659442 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-13 00:33:52.659451 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-13 00:33:52.659461 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-13 00:33:52.659470 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-13 00:33:52.659479 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-13 00:33:52.659489 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-13 00:33:52.659499 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-13 00:33:52.659508 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-13 00:33:52.659517 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-13 00:33:52.659527 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-13 00:33:52.659536 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-13 00:33:52.659546 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-13 00:33:52.659555 | orchestrator | 2026-04-13 00:33:52.659565 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-04-13 00:33:52.659575 | orchestrator | Monday 13 April 2026 00:33:39 +0000 (0:00:02.549) 0:06:55.736 ********** 2026-04-13 00:33:52.659584 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:52.659594 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:52.659603 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:52.659612 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:52.659622 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:52.659631 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:52.659641 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:52.659651 | orchestrator | 2026-04-13 00:33:52.659660 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-13 00:33:52.659670 | orchestrator | Monday 13 April 2026 00:33:39 +0000 (0:00:00.502) 0:06:56.239 ********** 2026-04-13 00:33:52.659681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:33:52.659693 | orchestrator | 2026-04-13 00:33:52.659703 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-13 00:33:52.659712 | orchestrator | Monday 13 April 2026 00:33:41 +0000 (0:00:01.071) 0:06:57.310 ********** 2026-04-13 00:33:52.659722 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:52.659731 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:52.659741 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:52.659750 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:52.659760 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:52.659769 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:52.659779 | orchestrator | ok: 
[testbed-node-5] 2026-04-13 00:33:52.659788 | orchestrator | 2026-04-13 00:33:52.659797 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-13 00:33:52.659807 | orchestrator | Monday 13 April 2026 00:33:41 +0000 (0:00:00.874) 0:06:58.184 ********** 2026-04-13 00:33:52.659816 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:52.659826 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:52.659835 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:52.659845 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:52.659854 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:52.659863 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:52.659873 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:52.659882 | orchestrator | 2026-04-13 00:33:52.659892 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-13 00:33:52.659901 | orchestrator | Monday 13 April 2026 00:33:42 +0000 (0:00:00.842) 0:06:59.027 ********** 2026-04-13 00:33:52.659917 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:52.659927 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:52.659936 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:52.659946 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:52.659955 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:52.659964 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:52.659974 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:52.659983 | orchestrator | 2026-04-13 00:33:52.659993 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-13 00:33:52.660002 | orchestrator | Monday 13 April 2026 00:33:43 +0000 (0:00:00.556) 0:06:59.583 ********** 2026-04-13 00:33:52.660012 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:52.660021 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:33:52.660031 | 
orchestrator | ok: [testbed-node-1] 2026-04-13 00:33:52.660040 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:33:52.660050 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:33:52.660059 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:33:52.660068 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:33:52.660100 | orchestrator | 2026-04-13 00:33:52.660117 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-13 00:33:52.660134 | orchestrator | Monday 13 April 2026 00:33:44 +0000 (0:00:01.469) 0:07:01.052 ********** 2026-04-13 00:33:52.660151 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:33:52.660166 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:33:52.660180 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:33:52.660189 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:33:52.660199 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:33:52.660208 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:33:52.660218 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:33:52.660227 | orchestrator | 2026-04-13 00:33:52.660237 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-13 00:33:52.660247 | orchestrator | Monday 13 April 2026 00:33:45 +0000 (0:00:00.690) 0:07:01.742 ********** 2026-04-13 00:33:52.660256 | orchestrator | ok: [testbed-manager] 2026-04-13 00:33:52.660265 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:33:52.660275 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:33:52.660284 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:33:52.660294 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:33:52.660303 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:33:52.660320 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:25.727466 | orchestrator | 2026-04-13 00:34:25.727555 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-04-13 00:34:25.727581 | orchestrator | Monday 13 April 2026 00:33:52 +0000 (0:00:07.234) 0:07:08.977 ********** 2026-04-13 00:34:25.727588 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:25.727596 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:34:25.727603 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:34:25.727609 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:34:25.727615 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:34:25.727622 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:25.727628 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:34:25.727634 | orchestrator | 2026-04-13 00:34:25.727641 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-13 00:34:25.727648 | orchestrator | Monday 13 April 2026 00:33:54 +0000 (0:00:01.347) 0:07:10.324 ********** 2026-04-13 00:34:25.727654 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:25.727660 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:34:25.727667 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:34:25.727673 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:34:25.727680 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:25.727686 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:34:25.727692 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:34:25.727698 | orchestrator | 2026-04-13 00:34:25.727724 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-13 00:34:25.727731 | orchestrator | Monday 13 April 2026 00:33:55 +0000 (0:00:01.757) 0:07:12.082 ********** 2026-04-13 00:34:25.727737 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:25.727744 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:34:25.727750 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:34:25.727756 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:34:25.727762 | 
orchestrator | changed: [testbed-node-3] 2026-04-13 00:34:25.727771 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:34:25.727777 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:34:25.727783 | orchestrator | 2026-04-13 00:34:25.727789 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-13 00:34:25.727795 | orchestrator | Monday 13 April 2026 00:33:57 +0000 (0:00:01.951) 0:07:14.033 ********** 2026-04-13 00:34:25.727800 | orchestrator | ok: [testbed-manager] 2026-04-13 00:34:25.727806 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:34:25.727812 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:34:25.727818 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:34:25.727825 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:34:25.727831 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:34:25.727837 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:34:25.727843 | orchestrator | 2026-04-13 00:34:25.727849 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-13 00:34:25.727855 | orchestrator | Monday 13 April 2026 00:33:58 +0000 (0:00:00.914) 0:07:14.948 ********** 2026-04-13 00:34:25.727861 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:34:25.727867 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:34:25.727873 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:34:25.727879 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:34:25.727885 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:34:25.727891 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:34:25.727897 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:34:25.727903 | orchestrator | 2026-04-13 00:34:25.727909 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-13 00:34:25.727915 | orchestrator | Monday 13 April 2026 00:33:59 +0000 (0:00:00.878) 0:07:15.827 ********** 
2026-04-13 00:34:25.727921 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:34:25.727927 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:34:25.727933 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:34:25.727939 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:34:25.727946 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:34:25.727952 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:34:25.727957 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:34:25.727963 | orchestrator |
2026-04-13 00:34:25.727970 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-13 00:34:25.727976 | orchestrator | Monday 13 April 2026 00:34:00 +0000 (0:00:00.716) 0:07:16.543 **********
2026-04-13 00:34:25.727982 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.727988 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.727994 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728000 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728006 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728012 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728019 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728025 | orchestrator |
2026-04-13 00:34:25.728031 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-13 00:34:25.728037 | orchestrator | Monday 13 April 2026 00:34:00 +0000 (0:00:00.576) 0:07:17.120 **********
2026-04-13 00:34:25.728065 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.728072 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.728078 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728083 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728090 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728096 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728108 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728114 | orchestrator |
2026-04-13 00:34:25.728121 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-13 00:34:25.728127 | orchestrator | Monday 13 April 2026 00:34:01 +0000 (0:00:00.557) 0:07:17.677 **********
2026-04-13 00:34:25.728133 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.728139 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.728145 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728151 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728157 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728163 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728169 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728174 | orchestrator |
2026-04-13 00:34:25.728181 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-13 00:34:25.728187 | orchestrator | Monday 13 April 2026 00:34:01 +0000 (0:00:00.551) 0:07:18.229 **********
2026-04-13 00:34:25.728193 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.728199 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.728205 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728211 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728217 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728222 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728228 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728234 | orchestrator |
2026-04-13 00:34:25.728256 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-13 00:34:25.728263 | orchestrator | Monday 13 April 2026 00:34:07 +0000 (0:00:05.689) 0:07:23.918 **********
2026-04-13 00:34:25.728269 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:34:25.728276 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:34:25.728282 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:34:25.728288 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:34:25.728294 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:34:25.728300 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:34:25.728306 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:34:25.728312 | orchestrator |
2026-04-13 00:34:25.728319 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-13 00:34:25.728325 | orchestrator | Monday 13 April 2026 00:34:08 +0000 (0:00:00.899) 0:07:24.817 **********
2026-04-13 00:34:25.728332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:25.728340 | orchestrator |
2026-04-13 00:34:25.728347 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-13 00:34:25.728353 | orchestrator | Monday 13 April 2026 00:34:09 +0000 (0:00:00.897) 0:07:25.715 **********
2026-04-13 00:34:25.728359 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.728365 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.728371 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728377 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728383 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728388 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728394 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728400 | orchestrator |
2026-04-13 00:34:25.728411 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-13 00:34:25.728417 | orchestrator | Monday 13 April 2026 00:34:11 +0000 (0:00:02.102) 0:07:27.817 **********
2026-04-13 00:34:25.728423 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.728429 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.728435 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728441 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728447 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728453 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728460 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728466 | orchestrator |
2026-04-13 00:34:25.728473 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-13 00:34:25.728484 | orchestrator | Monday 13 April 2026 00:34:12 +0000 (0:00:01.343) 0:07:29.161 **********
2026-04-13 00:34:25.728490 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:25.728496 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:25.728502 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:25.728508 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:25.728514 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:25.728520 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:25.728526 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:25.728533 | orchestrator |
2026-04-13 00:34:25.728539 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-13 00:34:25.728545 | orchestrator | Monday 13 April 2026 00:34:13 +0000 (0:00:00.970) 0:07:30.132 **********
2026-04-13 00:34:25.728552 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728559 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728565 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728571 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728578 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728583 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728589 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-13 00:34:25.728595 | orchestrator |
2026-04-13 00:34:25.728602 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-13 00:34:25.728608 | orchestrator | Monday 13 April 2026 00:34:15 +0000 (0:00:01.761) 0:07:31.894 **********
2026-04-13 00:34:25.728614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:25.728621 | orchestrator |
2026-04-13 00:34:25.728627 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-13 00:34:25.728632 | orchestrator | Monday 13 April 2026 00:34:16 +0000 (0:00:01.042) 0:07:32.936 **********
2026-04-13 00:34:25.728639 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:25.728645 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:25.728651 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:25.728657 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:25.728662 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:25.728669 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:25.728675 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:25.728681 | orchestrator |
2026-04-13 00:34:25.728691 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-13 00:34:56.243757 | orchestrator | Monday 13 April 2026 00:34:25 +0000 (0:00:09.035) 0:07:41.972 **********
2026-04-13 00:34:56.243921 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.243948 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.243969 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.243987 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.244005 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.244050 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.244069 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.244089 | orchestrator |
2026-04-13 00:34:56.244110 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-13 00:34:56.244165 | orchestrator | Monday 13 April 2026 00:34:27 +0000 (0:00:01.776) 0:07:43.748 **********
2026-04-13 00:34:56.244185 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.244205 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.244224 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.244243 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.244263 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.244284 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.244306 | orchestrator |
2026-04-13 00:34:56.244328 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-13 00:34:56.244348 | orchestrator | Monday 13 April 2026 00:34:29 +0000 (0:00:01.522) 0:07:45.271 **********
2026-04-13 00:34:56.244369 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.244390 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.244408 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.244427 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.244448 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.244468 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.244487 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.244507 | orchestrator |
2026-04-13 00:34:56.244528 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-13 00:34:56.244547 | orchestrator |
2026-04-13 00:34:56.244587 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-13 00:34:56.244608 | orchestrator | Monday 13 April 2026 00:34:30 +0000 (0:00:01.189) 0:07:46.461 **********
2026-04-13 00:34:56.244627 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:34:56.244645 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:34:56.244664 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:34:56.244684 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:34:56.244701 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:34:56.244719 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:34:56.244738 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:34:56.244756 | orchestrator |
2026-04-13 00:34:56.244774 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-13 00:34:56.244793 | orchestrator |
2026-04-13 00:34:56.244810 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-13 00:34:56.244829 | orchestrator | Monday 13 April 2026 00:34:30 +0000 (0:00:00.660) 0:07:47.121 **********
2026-04-13 00:34:56.244848 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.244865 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.244883 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.244894 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.244905 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.244915 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.244925 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.244936 | orchestrator |
2026-04-13 00:34:56.244946 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-13 00:34:56.244957 | orchestrator | Monday 13 April 2026 00:34:32 +0000 (0:00:01.345) 0:07:48.466 **********
2026-04-13 00:34:56.244968 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.244978 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.244989 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.244999 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.245010 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.245051 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.245062 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.245072 | orchestrator |
2026-04-13 00:34:56.245083 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-13 00:34:56.245093 | orchestrator | Monday 13 April 2026 00:34:33 +0000 (0:00:01.593) 0:07:50.060 **********
2026-04-13 00:34:56.245104 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:34:56.245114 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:34:56.245125 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:34:56.245152 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:34:56.245163 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:34:56.245173 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:34:56.245184 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:34:56.245194 | orchestrator |
2026-04-13 00:34:56.245205 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-13 00:34:56.245216 | orchestrator | Monday 13 April 2026 00:34:34 +0000 (0:00:00.523) 0:07:50.584 **********
2026-04-13 00:34:56.245227 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.245240 | orchestrator |
2026-04-13 00:34:56.245251 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-13 00:34:56.245261 | orchestrator | Monday 13 April 2026 00:34:35 +0000 (0:00:00.838) 0:07:51.422 **********
2026-04-13 00:34:56.245274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.245288 | orchestrator |
2026-04-13 00:34:56.245298 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-13 00:34:56.245309 | orchestrator | Monday 13 April 2026 00:34:36 +0000 (0:00:00.974) 0:07:52.396 **********
2026-04-13 00:34:56.245319 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.245330 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.245340 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.245351 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.245361 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.245372 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.245382 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.245393 | orchestrator |
2026-04-13 00:34:56.245426 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-13 00:34:56.245437 | orchestrator | Monday 13 April 2026 00:34:44 +0000 (0:00:08.303) 0:08:00.700 **********
2026-04-13 00:34:56.245448 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.245458 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.245469 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.245479 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.245490 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.245500 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.245511 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.245521 | orchestrator |
2026-04-13 00:34:56.245532 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-13 00:34:56.245543 | orchestrator | Monday 13 April 2026 00:34:45 +0000 (0:00:00.874) 0:08:01.575 **********
2026-04-13 00:34:56.245553 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.245563 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.245574 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.245584 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.245595 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.245605 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.245616 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.245626 | orchestrator |
2026-04-13 00:34:56.245637 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-13 00:34:56.245647 | orchestrator | Monday 13 April 2026 00:34:46 +0000 (0:00:01.352) 0:08:02.927 **********
2026-04-13 00:34:56.245658 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.245668 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.245679 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.245689 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.245699 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.245718 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.245729 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.245748 | orchestrator |
2026-04-13 00:34:56.245759 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-13 00:34:56.245770 | orchestrator | Monday 13 April 2026 00:34:48 +0000 (0:00:01.997) 0:08:04.925 **********
2026-04-13 00:34:56.245781 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.245791 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.245802 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.245812 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.245823 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.245833 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.245843 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.245854 | orchestrator |
2026-04-13 00:34:56.245864 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-13 00:34:56.245875 | orchestrator | Monday 13 April 2026 00:34:50 +0000 (0:00:01.390) 0:08:06.316 **********
2026-04-13 00:34:56.245885 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.245896 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.245906 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.245917 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.245927 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.245937 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.245948 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.245958 | orchestrator |
2026-04-13 00:34:56.245969 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-13 00:34:56.245980 | orchestrator |
2026-04-13 00:34:56.245990 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-13 00:34:56.246001 | orchestrator | Monday 13 April 2026 00:34:51 +0000 (0:00:01.145) 0:08:07.461 **********
2026-04-13 00:34:56.246116 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.246134 | orchestrator |
2026-04-13 00:34:56.246144 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-13 00:34:56.246155 | orchestrator | Monday 13 April 2026 00:34:52 +0000 (0:00:01.069) 0:08:08.531 **********
2026-04-13 00:34:56.246165 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.246176 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.246187 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.246197 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.246208 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.246218 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.246229 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.246240 | orchestrator |
2026-04-13 00:34:56.246251 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-13 00:34:56.246261 | orchestrator | Monday 13 April 2026 00:34:53 +0000 (0:00:00.860) 0:08:09.392 **********
2026-04-13 00:34:56.246272 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:56.246283 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:56.246293 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:56.246304 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:56.246314 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:56.246324 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:56.246335 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:56.246345 | orchestrator |
2026-04-13 00:34:56.246356 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-13 00:34:56.246367 | orchestrator | Monday 13 April 2026 00:34:54 +0000 (0:00:01.335) 0:08:10.727 **********
2026-04-13 00:34:56.246377 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:34:56.246388 | orchestrator |
2026-04-13 00:34:56.246399 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-13 00:34:56.246409 | orchestrator | Monday 13 April 2026 00:34:55 +0000 (0:00:00.931) 0:08:11.658 **********
2026-04-13 00:34:56.246420 | orchestrator | ok: [testbed-manager]
2026-04-13 00:34:56.246436 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:34:56.246447 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:34:56.246457 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:34:56.246524 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:34:56.246537 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:34:56.246547 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:34:56.246558 | orchestrator |
2026-04-13 00:34:56.246579 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-13 00:34:57.895248 | orchestrator | Monday 13 April 2026 00:34:56 +0000 (0:00:00.831) 0:08:12.489 **********
2026-04-13 00:34:57.895346 | orchestrator | changed: [testbed-manager]
2026-04-13 00:34:57.895361 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:34:57.895372 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:34:57.895381 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:34:57.895391 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:34:57.895401 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:34:57.895410 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:34:57.895420 | orchestrator |
2026-04-13 00:34:57.895430 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:34:57.895442 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-13 00:34:57.895454 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.895463 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-13 00:34:57.895473 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-13 00:34:57.895501 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.895512 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.895522 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-13 00:34:57.895531 | orchestrator |
2026-04-13 00:34:57.895541 | orchestrator |
2026-04-13 00:34:57.895551 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:34:57.895561 | orchestrator | Monday 13 April 2026 00:34:57 +0000 (0:00:01.293) 0:08:13.783 **********
2026-04-13 00:34:57.895571 | orchestrator | ===============================================================================
2026-04-13 00:34:57.895581 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.23s
2026-04-13 00:34:57.895591 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.86s
2026-04-13 00:34:57.895601 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.99s
2026-04-13 00:34:57.895610 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.83s
2026-04-13 00:34:57.895620 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.20s
2026-04-13 00:34:57.895630 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.24s
2026-04-13 00:34:57.895640 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.11s
2026-04-13 00:34:57.895650 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.68s
2026-04-13 00:34:57.895659 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s
2026-04-13 00:34:57.895669 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.04s
2026-04-13 00:34:57.895700 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.89s
2026-04-13 00:34:57.895710 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.66s
2026-04-13 00:34:57.895719 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.30s
2026-04-13 00:34:57.895729 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.04s
2026-04-13 00:34:57.895739 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.54s
2026-04-13 00:34:57.895748 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.23s
2026-04-13 00:34:57.895758 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.47s
2026-04-13 00:34:57.895767 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.46s
2026-04-13 00:34:57.895777 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.71s
2026-04-13 00:34:57.895786 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.69s
2026-04-13 00:34:58.106441 | orchestrator | + osism apply fail2ban
2026-04-13 00:35:09.953640 | orchestrator | 2026-04-13 00:35:09 | INFO  | Prepare task for execution of fail2ban.
2026-04-13 00:35:10.045183 | orchestrator | 2026-04-13 00:35:10 | INFO  | Task 1a48da8d-dee8-4a34-8176-6167f347025e (fail2ban) was prepared for execution.
2026-04-13 00:35:10.045275 | orchestrator | 2026-04-13 00:35:10 | INFO  | It takes a moment until task 1a48da8d-dee8-4a34-8176-6167f347025e (fail2ban) has been started and output is visible here.
2026-04-13 00:35:32.314380 | orchestrator |
2026-04-13 00:35:32.314548 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-13 00:35:32.314579 | orchestrator |
2026-04-13 00:35:32.314597 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-13 00:35:32.314614 | orchestrator | Monday 13 April 2026 00:35:13 +0000 (0:00:00.391) 0:00:00.391 **********
2026-04-13 00:35:32.314670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:35:32.314693 | orchestrator |
2026-04-13 00:35:32.314710 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-13 00:35:32.314727 | orchestrator | Monday 13 April 2026 00:35:15 +0000 (0:00:01.271) 0:00:01.662 **********
2026-04-13 00:35:32.314742 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:35:32.314758 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:35:32.314774 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:35:32.314791 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:35:32.314806 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:35:32.314822 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:35:32.314839 | orchestrator | changed: [testbed-manager]
2026-04-13 00:35:32.314855 | orchestrator |
2026-04-13 00:35:32.314872 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-13 00:35:32.314888 | orchestrator | Monday 13 April 2026 00:35:26 +0000 (0:00:11.320) 0:00:12.983 **********
2026-04-13 00:35:32.314905 | orchestrator | changed: [testbed-manager]
2026-04-13 00:35:32.314922 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:35:32.314938 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:35:32.314949 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:35:32.314959 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:35:32.314969 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:35:32.315000 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:35:32.315010 | orchestrator |
2026-04-13 00:35:32.315020 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-13 00:35:32.315031 | orchestrator | Monday 13 April 2026 00:35:28 +0000 (0:00:01.711) 0:00:14.694 **********
2026-04-13 00:35:32.315040 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:35:32.315051 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:35:32.315085 | orchestrator | ok: [testbed-manager]
2026-04-13 00:35:32.315095 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:35:32.315105 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:35:32.315114 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:35:32.315123 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:35:32.315133 | orchestrator |
2026-04-13 00:35:32.315142 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-13 00:35:32.315152 | orchestrator | Monday 13 April 2026 00:35:30 +0000 (0:00:02.046) 0:00:16.740 **********
2026-04-13 00:35:32.315162 | orchestrator | changed: [testbed-manager]
2026-04-13 00:35:32.315171 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:35:32.315180 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:35:32.315190 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:35:32.315199 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:35:32.315208 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:35:32.315217 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:35:32.315227 | orchestrator |
2026-04-13 00:35:32.315236 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:35:32.315246 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315257 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315267 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315276 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315286 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315295 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315305 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:35:32.315314 | orchestrator |
2026-04-13 00:35:32.315324 | orchestrator |
2026-04-13 00:35:32.315334 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:35:32.315343 | orchestrator | Monday 13 April 2026 00:35:31 +0000 (0:00:01.708) 0:00:18.449 **********
2026-04-13 00:35:32.315353 | orchestrator | ===============================================================================
2026-04-13 00:35:32.315362 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.32s
2026-04-13 00:35:32.315384 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 2.05s
2026-04-13 00:35:32.315394 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.71s
2026-04-13 00:35:32.315403 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.71s
2026-04-13 00:35:32.315413 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.27s
2026-04-13 00:35:32.520866 | orchestrator | + osism apply network
2026-04-13 00:35:43.923675 | orchestrator | 2026-04-13 00:35:43 | INFO  | Prepare task for execution of network.
2026-04-13 00:35:44.004125 | orchestrator | 2026-04-13 00:35:44 | INFO  | Task bb1e7690-133d-4a43-a960-1353b392f917 (network) was prepared for execution.
2026-04-13 00:35:44.004219 | orchestrator | 2026-04-13 00:35:44 | INFO  | It takes a moment until task bb1e7690-133d-4a43-a960-1353b392f917 (network) has been started and output is visible here.
2026-04-13 00:36:12.996798 | orchestrator |
2026-04-13 00:36:12.996899 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-13 00:36:12.996914 | orchestrator |
2026-04-13 00:36:12.997009 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-13 00:36:12.997022 | orchestrator | Monday 13 April 2026 00:35:47 +0000 (0:00:00.363) 0:00:00.363 **********
2026-04-13 00:36:12.997032 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.997043 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.997052 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.997062 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.997071 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.997080 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.997090 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.997099 | orchestrator |
2026-04-13 00:36:12.997108 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-13 00:36:12.997118 | orchestrator | Monday 13 April 2026 00:35:48 +0000 (0:00:00.675) 0:00:01.039 **********
2026-04-13 00:36:12.997130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:36:12.997142 | orchestrator |
2026-04-13 00:36:12.997151 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-13 00:36:12.997161 | orchestrator | Monday 13 April 2026 00:35:49 +0000 (0:00:01.318) 0:00:02.358 **********
2026-04-13 00:36:12.997170 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.997180 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.997189 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.997198 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.997222 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.997232 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.997242 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.997253 | orchestrator |
2026-04-13 00:36:12.997264 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-13 00:36:12.997275 | orchestrator | Monday 13 April 2026 00:35:51 +0000 (0:00:02.422) 0:00:04.781 **********
2026-04-13 00:36:12.997286 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.997298 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.997308 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.997318 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.997327 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.997336 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.997346 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.997355 | orchestrator |
2026-04-13 00:36:12.997364 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-13 00:36:12.997374 | orchestrator | Monday 13 April 2026 00:35:53 +0000 (0:00:01.536) 0:00:06.318 **********
2026-04-13 00:36:12.997384 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-13 00:36:12.997393 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-13 00:36:12.997403 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-13 00:36:12.997412 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-13 00:36:12.997421 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-13 00:36:12.997431 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-13 00:36:12.997440 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-13 00:36:12.997449 | orchestrator |
2026-04-13 00:36:12.997459 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-13 00:36:12.997468 | orchestrator | Monday 13 April 2026 00:35:54 +0000 (0:00:01.185) 0:00:07.504 **********
2026-04-13 00:36:12.997478 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-13 00:36:12.997488 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 00:36:12.997497 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-13 00:36:12.997507 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-13 00:36:12.997516 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 00:36:12.997526 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-13 00:36:12.997535 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-13 00:36:12.997552 | orchestrator |
2026-04-13 00:36:12.997561 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-13 00:36:12.997570 | orchestrator | Monday 13 April 2026 00:35:58 +0000 (0:00:03.464) 0:00:10.969 **********
2026-04-13 00:36:12.997580 | orchestrator | changed: [testbed-manager]
2026-04-13 00:36:12.997590 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:36:12.997599 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:36:12.997608 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:36:12.997618 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:36:12.997627 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:36:12.997636 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:36:12.997646 | orchestrator |
2026-04-13 00:36:12.997655 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-13 00:36:12.997665 | orchestrator | Monday 13 April 2026 00:35:59 +0000 (0:00:01.677) 0:00:12.646 **********
2026-04-13 00:36:12.997674 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-13 00:36:12.997684 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 00:36:12.997693 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 00:36:12.997702 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-13 00:36:12.997712 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-13 00:36:12.997721 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-13 00:36:12.997731 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-13 00:36:12.997740 | orchestrator |
2026-04-13 00:36:12.997750 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-13 00:36:12.997759 | orchestrator | Monday 13 April 2026 00:36:01 +0000 (0:00:01.973) 0:00:14.620 **********
2026-04-13 00:36:12.997769 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.997778 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.997788 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.997797 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.997807 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.997816 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.997825 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.997834 | orchestrator |
2026-04-13 00:36:12.997844 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-13 00:36:12.997871 | orchestrator | Monday 13 April 2026 00:36:02 +0000 (0:00:00.983) 0:00:15.603 **********
2026-04-13 00:36:12.997882 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:12.997891 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:12.997900 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:12.997910 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:12.997919 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:12.997948 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:12.997958 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:12.997968 | orchestrator |
2026-04-13 00:36:12.997977 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-13 00:36:12.997987 | orchestrator | Monday 13 April 2026 00:36:03 +0000 (0:00:00.825) 0:00:16.429 **********
2026-04-13 00:36:12.997996 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.998006 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.998232 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.998248 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.998258 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.998268 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.998277 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.998287 | orchestrator |
2026-04-13 00:36:12.998296 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-13 00:36:12.998306 | orchestrator | Monday 13 April 2026 00:36:05 +0000 (0:00:02.013) 0:00:18.442 **********
2026-04-13 00:36:12.998316 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:12.998325 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:12.998335 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:12.998344 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:12.998362 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:12.998371 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:12.998387 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-13 00:36:12.998399 | orchestrator |
2026-04-13 00:36:12.998408 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-13 00:36:12.998418 | orchestrator | Monday 13 April 2026 00:36:06 +0000 (0:00:01.013) 0:00:19.456 **********
2026-04-13 00:36:12.998428 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.998437 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:36:12.998447 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:36:12.998456 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:36:12.998465 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:36:12.998475 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:36:12.998484 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:36:12.998494 | orchestrator |
2026-04-13 00:36:12.998503 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-13 00:36:12.998513 | orchestrator | Monday 13 April 2026 00:36:07 +0000 (0:00:01.425) 0:00:20.881 **********
2026-04-13 00:36:12.998523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:36:12.998534 | orchestrator |
2026-04-13 00:36:12.998544 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-13 00:36:12.998553 | orchestrator | Monday 13 April 2026 00:36:09 +0000 (0:00:01.339) 0:00:22.221 **********
2026-04-13 00:36:12.998563 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.998572 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.998581 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.998591 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.998601 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.998610 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.998619 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.998629 | orchestrator |
2026-04-13 00:36:12.998638 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-13 00:36:12.998648 | orchestrator | Monday 13 April 2026 00:36:11 +0000 (0:00:01.768) 0:00:23.989 **********
2026-04-13 00:36:12.998657 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:12.998667 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:12.998676 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:12.998686 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:12.998695 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:12.998704 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:12.998714 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:12.998723 | orchestrator |
2026-04-13 00:36:12.998733 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-13 00:36:12.998743 | orchestrator | Monday 13 April 2026 00:36:11 +0000 (0:00:00.875) 0:00:24.865 **********
2026-04-13 00:36:12.998752 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998762 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998771 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998781 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998790 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998800 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998809 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998819 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998828 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998844 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-13 00:36:12.998853 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998863 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998872 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998882 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-13 00:36:12.998891 | orchestrator |
2026-04-13 00:36:12.998910 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-13 00:36:30.214367 | orchestrator | Monday 13 April 2026 00:36:12 +0000 (0:00:01.023) 0:00:25.888 **********
2026-04-13 00:36:30.214464 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:30.214478 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:30.214488 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:30.214496 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:30.214504 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:30.214512 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:30.214520 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:30.214528 | orchestrator |
2026-04-13 00:36:30.214537 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-13 00:36:30.214545 | orchestrator | Monday 13 April 2026 00:36:13 +0000 (0:00:00.795) 0:00:26.684 **********
2026-04-13 00:36:30.214555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:36:30.214565 | orchestrator |
2026-04-13 00:36:30.214573 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-13 00:36:30.214581 | orchestrator | Monday 13 April 2026 00:36:18 +0000 (0:00:04.779) 0:00:31.463 **********
2026-04-13 00:36:30.214605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214614 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-13 00:36:30.214624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-13 00:36:30.214647 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-13 00:36:30.214656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-13 00:36:30.214691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-13 00:36:30.214738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-13 00:36:30.214747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-13 00:36:30.214755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-13 00:36:30.214763 | orchestrator |
2026-04-13 00:36:30.214771 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-13 00:36:30.214779 | orchestrator | Monday 13 April 2026 00:36:24 +0000 (0:00:05.906) 0:00:37.370 **********
2026-04-13 00:36:30.214787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214807 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-13 00:36:30.214815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-13 00:36:30.214831 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-13 00:36:30.214845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-13 00:36:30.214861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-13 00:36:30.214877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-13 00:36:30.214884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-13 00:36:30.214898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-13 00:36:43.695819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-13 00:36:43.695978 | orchestrator |
2026-04-13 00:36:43.695994 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-13 00:36:43.696005 | orchestrator | Monday 13 April 2026 00:36:30 +0000 (0:00:06.038) 0:00:43.408 **********
2026-04-13 00:36:43.696016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:36:43.696025 | orchestrator |
2026-04-13 00:36:43.696034 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-13 00:36:43.696046 | orchestrator | Monday 13 April 2026 00:36:31 +0000 (0:00:01.383) 0:00:44.792 **********
2026-04-13 00:36:43.696062 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:43.696085 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:43.696105 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:43.696119 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:43.696150 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:43.696165 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.696180 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.696194 | orchestrator |
2026-04-13 00:36:43.696209 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-13 00:36:43.696224 | orchestrator | Monday 13 April 2026 00:36:33 +0000 (0:00:01.132) 0:00:45.925 **********
2026-04-13 00:36:43.696240 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696258 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696298 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696309 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696317 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696326 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696335 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696343 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696352 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:43.696362 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696373 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696383 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696393 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696403 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:43.696412 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696423 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696433 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696442 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696452 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:43.696461 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696472 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696482 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696491 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696501 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:43.696511 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696521 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696531 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696540 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696553 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:43.696568 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:43.696582 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-13 00:36:43.696607 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-13 00:36:43.696621 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-13 00:36:43.696634 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-13 00:36:43.696647 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:43.696661 | orchestrator |
2026-04-13 00:36:43.696674 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-13 00:36:43.696708 | orchestrator | Monday 13 April 2026 00:36:33 +0000 (0:00:00.801) 0:00:46.727 **********
2026-04-13 00:36:43.696724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:36:43.696740 | orchestrator |
2026-04-13 00:36:43.696755 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-13 00:36:43.696781 | orchestrator | Monday 13 April 2026 00:36:35 +0000 (0:00:01.285) 0:00:48.012 **********
2026-04-13 00:36:43.696795 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:43.696809 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:43.696817 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:43.696826 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:43.696835 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:43.696843 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:43.696852 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:43.696860 | orchestrator |
2026-04-13 00:36:43.696869 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-04-13 00:36:43.696878 | orchestrator | Monday 13 April 2026 00:36:35 +0000 (0:00:00.800) 0:00:48.812 **********
2026-04-13 00:36:43.696886 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:43.696921 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:43.696930 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:43.696945 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:43.696954 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:43.696962 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:43.696971 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:43.696979 | orchestrator |
2026-04-13 00:36:43.696988 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-13 00:36:43.696996 | orchestrator | Monday 13 April 2026 00:36:36 +0000 (0:00:00.629) 0:00:49.442 **********
2026-04-13 00:36:43.697005 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:36:43.697013 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:36:43.697022 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:36:43.697030 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:36:43.697039 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:36:43.697047 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:36:43.697055 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:36:43.697064 | orchestrator |
2026-04-13 00:36:43.697072 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-13 00:36:43.697081 | orchestrator | Monday 13 April 2026 00:36:37 +0000 (0:00:00.796) 0:00:50.239 **********
2026-04-13 00:36:43.697090 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:43.697098 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:43.697107 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:43.697115 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:43.697123 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:43.697132 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.697140 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.697149 | orchestrator |
2026-04-13 00:36:43.697157 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-13 00:36:43.697166 | orchestrator | Monday 13 April 2026 00:36:38 +0000 (0:00:01.488) 0:00:51.727 **********
2026-04-13 00:36:43.697175 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:43.697183 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:43.697191 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:43.697200 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:43.697208 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:43.697216 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.697225 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.697233 | orchestrator |
2026-04-13 00:36:43.697242 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-13 00:36:43.697250 | orchestrator | Monday 13 April 2026 00:36:40 +0000 (0:00:01.231) 0:00:52.959 **********
2026-04-13 00:36:43.697259 | orchestrator | ok: [testbed-manager]
2026-04-13 00:36:43.697267 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:36:43.697279 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:36:43.697287 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:36:43.697296 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:36:43.697304 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:36:43.697312 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:36:43.697327 | orchestrator |
2026-04-13 00:36:43.697336
| orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-13 00:36:43.697344 | orchestrator | Monday 13 April 2026 00:36:42 +0000 (0:00:02.241) 0:00:55.201 ********** 2026-04-13 00:36:43.697353 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:43.697362 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:43.697370 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:43.697378 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:43.697387 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:43.697395 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:43.697404 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:43.697412 | orchestrator | 2026-04-13 00:36:43.697421 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-13 00:36:43.697429 | orchestrator | Monday 13 April 2026 00:36:42 +0000 (0:00:00.638) 0:00:55.839 ********** 2026-04-13 00:36:43.697438 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:36:43.697446 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:36:43.697455 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:36:43.697463 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:36:43.697471 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:36:43.697480 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:36:43.697488 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:36:43.697497 | orchestrator | 2026-04-13 00:36:43.697505 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:36:43.697515 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-13 00:36:43.697525 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 00:36:43.697541 | orchestrator | testbed-node-1 : ok=24 
 changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 00:36:43.996508 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 00:36:43.996609 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 00:36:43.996626 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 00:36:43.996639 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-13 00:36:43.996650 | orchestrator | 2026-04-13 00:36:43.996662 | orchestrator | 2026-04-13 00:36:43.996674 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:36:43.996685 | orchestrator | Monday 13 April 2026 00:36:43 +0000 (0:00:00.747) 0:00:56.587 ********** 2026-04-13 00:36:43.996696 | orchestrator | =============================================================================== 2026-04-13 00:36:43.996707 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.04s 2026-04-13 00:36:43.996718 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.91s 2026-04-13 00:36:43.996729 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.78s 2026-04-13 00:36:43.996762 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.46s 2026-04-13 00:36:43.996773 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.42s 2026-04-13 00:36:43.996784 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.24s 2026-04-13 00:36:43.996795 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.01s 2026-04-13 00:36:43.996832 | orchestrator | osism.commons.network : Remove 
netplan configuration template ----------- 1.97s 2026-04-13 00:36:43.996844 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.77s 2026-04-13 00:36:43.996855 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s 2026-04-13 00:36:43.996865 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.54s 2026-04-13 00:36:43.996876 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.49s 2026-04-13 00:36:43.996886 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.43s 2026-04-13 00:36:43.996951 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.38s 2026-04-13 00:36:43.996962 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.34s 2026-04-13 00:36:43.996973 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.32s 2026-04-13 00:36:43.996983 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.29s 2026-04-13 00:36:43.996994 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.23s 2026-04-13 00:36:43.997005 | orchestrator | osism.commons.network : Create required directories --------------------- 1.19s 2026-04-13 00:36:43.997016 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2026-04-13 00:36:44.196277 | orchestrator | + osism apply wireguard 2026-04-13 00:36:55.555518 | orchestrator | 2026-04-13 00:36:55 | INFO  | Prepare task for execution of wireguard. 2026-04-13 00:36:55.637432 | orchestrator | 2026-04-13 00:36:55 | INFO  | Task e046e3a6-98a3-46f0-bbc0-98a21298573a (wireguard) was prepared for execution. 
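The skipped loop above iterates over systemd-networkd VXLAN unit files (`30-vxlan0`/`30-vxlan1` `.netdev` and `.network` pairs). As a hedged sketch of what such a pair generally contains — the interface name is kept from the log, while the VNI, peer endpoint, and address are illustrative assumptions, not values from this job:

```ini
# /etc/systemd/network/30-vxlan0.netdev -- defines the VXLAN device itself
[NetDev]
Name=vxlan0
Kind=vxlan

[VXLAN]
# illustrative VXLAN network identifier and unicast peer endpoint
VNI=42
Remote=192.0.2.10

# /etc/systemd/network/30-vxlan0.network -- configures the device once created
[Match]
Name=vxlan0

[Network]
# illustrative tunnel address
Address=203.0.113.1/24
```

systemd-networkd picks up both files on a daemon restart or `networkctl reload`, which is presumably why the role pairs these templates with a "Reload systemd-networkd" handler.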
2026-04-13 00:36:55 | INFO  | It takes a moment until task e046e3a6-98a3-46f0-bbc0-98a21298573a (wireguard) has been started and output is visible here.

PLAY [Apply role wireguard] ****************************************************

TASK [osism.services.wireguard : Install iptables package] *********************
Monday 13 April 2026 00:36:58 +0000 (0:00:00.315) 0:00:00.315 **********
ok: [testbed-manager]

TASK [osism.services.wireguard : Install wireguard package] ********************
Monday 13 April 2026 00:37:00 +0000 (0:00:01.875) 0:00:02.191 **********
changed: [testbed-manager]

TASK [osism.services.wireguard : Create public and private key - server] *******
Monday 13 April 2026 00:37:07 +0000 (0:00:06.682) 0:00:08.873 **********
changed: [testbed-manager]

TASK [osism.services.wireguard : Create preshared key] *************************
Monday 13 April 2026 00:37:08 +0000 (0:00:00.533) 0:00:09.406 **********
changed: [testbed-manager]

TASK [osism.services.wireguard : Get preshared key] ****************************
Monday 13 April 2026 00:37:08 +0000 (0:00:00.406) 0:00:09.812 **********
ok: [testbed-manager]

TASK [osism.services.wireguard : Get public key - server] **********************
Monday 13 April 2026 00:37:08 +0000 (0:00:00.503) 0:00:10.316 **********
ok: [testbed-manager]

TASK [osism.services.wireguard : Get private key - server] *********************
Monday 13 April 2026 00:37:09 +0000 (0:00:00.385) 0:00:10.701 **********
ok: [testbed-manager]

TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
Monday 13 April 2026 00:37:09 +0000 (0:00:00.424) 0:00:11.126 **********
changed: [testbed-manager]

TASK [osism.services.wireguard : Copy client configuration files] **************
Monday 13 April 2026 00:37:10 +0000 (0:00:01.186) 0:00:12.313 **********
changed: [testbed-manager] => (item=None)
changed: [testbed-manager]

TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
Monday 13 April 2026 00:37:11 +0000 (0:00:00.968) 0:00:13.282 **********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
Monday 13 April 2026 00:37:14 +0000 (0:00:02.112) 0:00:15.394 **********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Monday 13 April 2026 00:37:15 +0000 (0:00:00.955) 0:00:16.350 **********
===============================================================================
osism.services.wireguard : Install wireguard package -------------------- 6.68s
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.11s
osism.services.wireguard : Install iptables package --------------------- 1.88s
osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s
osism.services.wireguard : Copy client configuration files -------------- 0.97s
osism.services.wireguard : Restart wg0 service -------------------------- 0.96s
osism.services.wireguard : Create public and private key - server ------- 0.53s
osism.services.wireguard : Get preshared key ---------------------------- 0.50s
osism.services.wireguard : Get private key - server --------------------- 0.42s
osism.services.wireguard : Create preshared key ------------------------- 0.41s
osism.services.wireguard : Get public key - server ---------------------- 0.39s
+ sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    15  100    15    0     0    204      0 --:--:-- --:--:-- --:--:--   205
+ osism apply --environment custom workarounds
2026-04-13 00:37:16 | INFO  | Trying to run play workarounds in environment custom
2026-04-13 00:37:27 | INFO  | Prepare task for execution of workarounds.
2026-04-13 00:37:27 | INFO  | Task bc58b63e-e7b1-47cb-9c77-76652f670e59 (workarounds) was prepared for execution.
2026-04-13 00:37:27 | INFO  | It takes a moment until task bc58b63e-e7b1-47cb-9c77-76652f670e59 (workarounds) has been started and output is visible here.
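The wireguard role above generates a server key pair and a preshared key, renders `/etc/wireguard/wg0.conf`, copies client configurations, and starts `wg-quick@wg0.service`. A minimal sketch of the kind of `wg0.conf` that `wg-quick` consumes — the addresses, port, and placeholder keys here are illustrative assumptions; the real file is rendered by the role from the keys created in the tasks above:

```ini
[Interface]
# private key from the "Create public and private key - server" task
PrivateKey = <server-private-key>
# illustrative tunnel address and listen port
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
# one [Peer] section per client configuration file copied by the role
PublicKey = <client-public-key>
# the preshared key adds a symmetric-key layer on top of the key pair
PresharedKey = <preshared-key>
AllowedIPs = 10.8.0.2/32
```

Restarting `wg-quick@wg0` (the handler at the end of the play) re-reads this file and brings the tunnel up with the new keys.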
PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on virtualization_role] ********************************
Monday 13 April 2026 00:37:30 +0000 (0:00:00.177) 0:00:00.177 **********
changed: [testbed-manager] => (item=virtualization_role_guest)
changed: [testbed-node-0] => (item=virtualization_role_guest)
changed: [testbed-node-1] => (item=virtualization_role_guest)
changed: [testbed-node-2] => (item=virtualization_role_guest)
changed: [testbed-node-3] => (item=virtualization_role_guest)
changed: [testbed-node-4] => (item=virtualization_role_guest)
changed: [testbed-node-5] => (item=virtualization_role_guest)

PLAY [Apply netplan configuration on the manager node] *************************

TASK [Apply netplan configuration] *********************************************
Monday 13 April 2026 00:37:31 +0000 (0:00:00.747) 0:00:00.925 **********
ok: [testbed-manager]

PLAY [Apply netplan configuration on all other nodes] **************************

TASK [Apply netplan configuration] *********************************************
Monday 13 April 2026 00:37:33 +0000 (0:00:02.783) 0:00:03.708 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Add custom CA certificates to non-manager nodes] *************************

TASK [Copy custom CA certificates] *********************************************
Monday 13 April 2026 00:37:36 +0000 (0:00:02.356) 0:00:06.065 **********
changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)

TASK [Run update-ca-certificates] **********************************************
Monday 13 April 2026 00:37:37 +0000 (0:00:01.331) 0:00:07.397 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Run update-ca-trust] *****************************************************
Monday 13 April 2026 00:37:41 +0000 (0:00:04.031) 0:00:11.429 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Add a workaround service] ************************************************

TASK [Copy workarounds.sh scripts] *********************************************
Monday 13 April 2026 00:37:42 +0000 (0:00:00.555) 0:00:11.984 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Copy workarounds systemd unit file] **************************************
Monday 13 April 2026 00:37:43 +0000 (0:00:01.770) 0:00:13.755 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Reload systemd daemon] ***************************************************
Monday 13 April 2026 00:37:45 +0000 (0:00:01.517) 0:00:15.272 **********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Enable workarounds.service (Debian)] *************************************
Monday 13 April 2026 00:37:47 +0000 (0:00:01.641) 0:00:16.913 **********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Enable and start workarounds.service (RedHat)] ***************************
Monday 13 April 2026 00:37:48 +0000 (0:00:01.577) 0:00:18.491 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************

TASK [Install python3-docker] **************************************************
Monday 13 April 2026 00:37:49 +0000 (0:00:00.803) 0:00:19.295 **********
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
testbed-node-0  : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-1  : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-2  : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-3  : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-4  : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-5  : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Monday 13 April 2026 00:37:51 +0000 (0:00:02.493) 0:00:21.789 **********
===============================================================================
Run update-ca-certificates ---------------------------------------------- 4.03s
Apply netplan configuration --------------------------------------------- 2.78s
Install python3-docker -------------------------------------------------- 2.49s
Apply netplan configuration --------------------------------------------- 2.36s
Copy workarounds.sh scripts --------------------------------------------- 1.77s
Reload systemd daemon --------------------------------------------------- 1.64s
Enable workarounds.service (Debian) ------------------------------------- 1.58s
Copy workarounds systemd unit file -------------------------------------- 1.52s
Copy custom CA certificates --------------------------------------------- 1.33s
Enable and start workarounds.service (RedHat) --------------------------- 0.80s
Group hosts based on virtualization_role -------------------------------- 0.75s
Run update-ca-trust ----------------------------------------------------- 0.56s
+ osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-13 00:38:03 | INFO  | Prepare task for execution of reboot.
2026-04-13 00:38:04 | INFO  | Task 4e8879ae-f110-427b-8977-23bc2c2d06de (reboot) was prepared for execution.
2026-04-13 00:38:04 | INFO  | It takes a moment until task 4e8879ae-f110-427b-8977-23bc2c2d06de (reboot) has been started and output is visible here.
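The reboot play that follows is gated on the extra variable passed above as `-e ireallymeanit=yes`: without it, the first task aborts the play instead of rebooting. A hedged sketch of that guard pattern — the task names match the log, but the module choices and variable handling are assumptions about the implementation, not taken from this job:

```yaml
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "To reboot, re-run with -e ireallymeanit=yes"
  when: ireallymeanit | default('no') != 'yes'

- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && /sbin/reboot   # assumed fire-and-forget form
  async: 1
  poll: 0
```

With `async: 1` and `poll: 0`, Ansible launches the command and immediately moves on, which matches the log below where the "wait for the reboot to complete" variant is skipped for each node.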
PLAY [Reboot systems] **********************************************************

TASK [Exit playbook, if user did not mean to reboot systems] *******************
Monday 13 April 2026 00:38:07 +0000 (0:00:00.275) 0:00:00.275 **********
skipping: [testbed-node-0]

TASK [Reboot system - do not wait for the reboot to complete] ******************
Monday 13 April 2026 00:38:07 +0000 (0:00:00.165) 0:00:00.441 **********
changed: [testbed-node-0]

TASK [Reboot system - wait for the reboot to complete] *************************
Monday 13 April 2026 00:38:08 +0000 (0:00:01.283) 0:00:01.724 **********
skipping: [testbed-node-0]

PLAY [Reboot systems] **********************************************************

TASK [Exit playbook, if user did not mean to reboot systems] *******************
Monday 13 April 2026 00:38:08 +0000 (0:00:00.113) 0:00:01.837 **********
skipping: [testbed-node-1]

TASK [Reboot system - do not wait for the reboot to complete] ******************
Monday 13 April 2026 00:38:09 +0000 (0:00:00.094) 0:00:01.932 **********
changed: [testbed-node-1]

TASK [Reboot system - wait for the reboot to complete] *************************
Monday 13 April 2026 00:38:10 +0000 (0:00:01.035) 0:00:02.967 **********
skipping: [testbed-node-1]

PLAY [Reboot systems] **********************************************************

TASK [Exit playbook, if user did not mean to reboot systems] *******************
Monday 13 April 2026 00:38:10 +0000 (0:00:00.125) 0:00:03.093 **********
skipping: [testbed-node-2]

TASK [Reboot system - do not wait for the reboot to complete] ******************
Monday 13 April 2026 00:38:10 +0000 (0:00:00.087) 0:00:03.181 **********
changed: [testbed-node-2]

TASK [Reboot system - wait for the reboot to complete] *************************
Monday 13 April 2026 00:38:11 +0000 (0:00:01.048) 0:00:04.230 **********
skipping: [testbed-node-2]

PLAY [Reboot systems] **********************************************************

TASK [Exit playbook, if
user did not mean to reboot systems] ******************* 2026-04-13 00:38:15.619356 | orchestrator | Monday 13 April 2026 00:38:11 +0000 (0:00:00.116) 0:00:04.346 ********** 2026-04-13 00:38:15.619366 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:38:15.619375 | orchestrator | 2026-04-13 00:38:15.619384 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:15.619394 | orchestrator | Monday 13 April 2026 00:38:11 +0000 (0:00:00.095) 0:00:04.441 ********** 2026-04-13 00:38:15.619403 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:38:15.619412 | orchestrator | 2026-04-13 00:38:15.619421 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:15.619431 | orchestrator | Monday 13 April 2026 00:38:12 +0000 (0:00:01.038) 0:00:05.480 ********** 2026-04-13 00:38:15.619440 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:38:15.619449 | orchestrator | 2026-04-13 00:38:15.619458 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:15.619467 | orchestrator | 2026-04-13 00:38:15.619476 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:15.619485 | orchestrator | Monday 13 April 2026 00:38:12 +0000 (0:00:00.112) 0:00:05.593 ********** 2026-04-13 00:38:15.619494 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:38:15.619503 | orchestrator | 2026-04-13 00:38:15.619512 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:15.619522 | orchestrator | Monday 13 April 2026 00:38:12 +0000 (0:00:00.242) 0:00:05.835 ********** 2026-04-13 00:38:15.619531 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:38:15.619540 | orchestrator | 2026-04-13 00:38:15.619549 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-13 00:38:15.619558 | orchestrator | Monday 13 April 2026 00:38:13 +0000 (0:00:01.058) 0:00:06.894 ********** 2026-04-13 00:38:15.619567 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:38:15.619576 | orchestrator | 2026-04-13 00:38:15.619585 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-13 00:38:15.619595 | orchestrator | 2026-04-13 00:38:15.619604 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-13 00:38:15.619613 | orchestrator | Monday 13 April 2026 00:38:14 +0000 (0:00:00.137) 0:00:07.032 ********** 2026-04-13 00:38:15.619623 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:38:15.619632 | orchestrator | 2026-04-13 00:38:15.619641 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-13 00:38:15.619651 | orchestrator | Monday 13 April 2026 00:38:14 +0000 (0:00:00.129) 0:00:07.162 ********** 2026-04-13 00:38:15.619661 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:38:15.619669 | orchestrator | 2026-04-13 00:38:15.619676 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-13 00:38:15.619685 | orchestrator | Monday 13 April 2026 00:38:15 +0000 (0:00:01.056) 0:00:08.219 ********** 2026-04-13 00:38:15.619709 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:38:15.619718 | orchestrator | 2026-04-13 00:38:15.619726 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:38:15.619736 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:15.619751 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:15.619760 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-13 00:38:15.619768 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:15.619776 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:15.619788 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 00:38:15.619820 | orchestrator | 2026-04-13 00:38:15.619829 | orchestrator | 2026-04-13 00:38:15.619837 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:38:15.619845 | orchestrator | Monday 13 April 2026 00:38:15 +0000 (0:00:00.040) 0:00:08.259 ********** 2026-04-13 00:38:15.619853 | orchestrator | =============================================================================== 2026-04-13 00:38:15.619861 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.52s 2026-04-13 00:38:15.619869 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.82s 2026-04-13 00:38:15.619876 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2026-04-13 00:38:15.847479 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-13 00:38:27.253489 | orchestrator | 2026-04-13 00:38:27 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-13 00:38:27.329218 | orchestrator | 2026-04-13 00:38:27 | INFO  | Task e44c98e5-6ebe-44ba-9a67-ccb1e2bc39aa (wait-for-connection) was prepared for execution. 2026-04-13 00:38:27.329306 | orchestrator | 2026-04-13 00:38:27 | INFO  | It takes a moment until task e44c98e5-6ebe-44ba-9a67-ccb1e2bc39aa (wait-for-connection) has been started and output is visible here. 
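The sequence above reboots each node with "do not wait for the reboot to complete" and then runs `osism apply wait-for-connection` as a separate step to poll until the nodes answer again. A minimal shell sketch of that poll-until-reachable step, assuming an SSH-based check (the helper name `wait_for_ssh` and the 5-second poll interval are illustrative, not part of OSISM):

```shell
# Poll a host over SSH until it accepts a connection or a deadline passes.
# Mirrors the "reboot without waiting, then wait-for-connection" pattern above.
wait_for_ssh() {
    local host="$1" timeout="${2:-300}" deadline
    deadline=$(( $(date +%s) + timeout ))
    # BatchMode avoids hanging on a password prompt while the host is down.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( $(date +%s) >= deadline )) && return 1
        sleep 5
    done
}
```

Splitting reboot and wait into two plays keeps the reboot task fast and lets the reachability check fan out over all nodes at once, which matches the single 11.65s "Wait until remote system is reachable" task in the recap.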
2026-04-13 00:38:42.674508 | orchestrator | 2026-04-13 00:38:42.674640 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-13 00:38:42.674670 | orchestrator | 2026-04-13 00:38:42.674692 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-13 00:38:42.674709 | orchestrator | Monday 13 April 2026 00:38:30 +0000 (0:00:00.371) 0:00:00.371 ********** 2026-04-13 00:38:42.674727 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:38:42.674745 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:38:42.674763 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:38:42.674862 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:38:42.674882 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:38:42.674902 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:38:42.674919 | orchestrator | 2026-04-13 00:38:42.674932 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:38:42.674954 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:42.674974 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:42.674993 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:42.675013 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:42.675032 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:42.675086 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:38:42.675099 | orchestrator | 2026-04-13 00:38:42.675110 | orchestrator | 2026-04-13 00:38:42.675121 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-13 00:38:42.675131 | orchestrator | Monday 13 April 2026 00:38:42 +0000 (0:00:11.649) 0:00:12.021 ********** 2026-04-13 00:38:42.675142 | orchestrator | =============================================================================== 2026-04-13 00:38:42.675153 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.65s 2026-04-13 00:38:42.878326 | orchestrator | + osism apply hddtemp 2026-04-13 00:38:54.328867 | orchestrator | 2026-04-13 00:38:54 | INFO  | Prepare task for execution of hddtemp. 2026-04-13 00:38:54.407130 | orchestrator | 2026-04-13 00:38:54 | INFO  | Task dcb14c44-04b3-48fc-af20-ee36a90ee61f (hddtemp) was prepared for execution. 2026-04-13 00:38:54.407251 | orchestrator | 2026-04-13 00:38:54 | INFO  | It takes a moment until task dcb14c44-04b3-48fc-af20-ee36a90ee61f (hddtemp) has been started and output is visible here. 2026-04-13 00:39:21.651288 | orchestrator | 2026-04-13 00:39:21.651426 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-13 00:39:21.651445 | orchestrator | 2026-04-13 00:39:21.651458 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-13 00:39:21.651469 | orchestrator | Monday 13 April 2026 00:38:57 +0000 (0:00:00.349) 0:00:00.349 ********** 2026-04-13 00:39:21.651481 | orchestrator | ok: [testbed-manager] 2026-04-13 00:39:21.651493 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:39:21.651504 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:39:21.651515 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:39:21.651525 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:39:21.651536 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:39:21.651547 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:39:21.651558 | orchestrator | 2026-04-13 00:39:21.651568 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-13 00:39:21.651579 | orchestrator | Monday 13 April 2026 00:38:58 +0000 (0:00:00.632) 0:00:00.982 ********** 2026-04-13 00:39:21.651592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:39:21.651605 | orchestrator | 2026-04-13 00:39:21.651632 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-13 00:39:21.651644 | orchestrator | Monday 13 April 2026 00:38:59 +0000 (0:00:01.187) 0:00:02.169 ********** 2026-04-13 00:39:21.651655 | orchestrator | ok: [testbed-manager] 2026-04-13 00:39:21.651665 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:39:21.651676 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:39:21.651686 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:39:21.651697 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:39:21.651708 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:39:21.651718 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:39:21.651729 | orchestrator | 2026-04-13 00:39:21.651807 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-13 00:39:21.651821 | orchestrator | Monday 13 April 2026 00:39:02 +0000 (0:00:02.446) 0:00:04.615 ********** 2026-04-13 00:39:21.651833 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:21.651847 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:39:21.651860 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:39:21.651872 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:39:21.651884 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:39:21.651896 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:39:21.651908 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:39:21.651920 | 
orchestrator | 2026-04-13 00:39:21.651932 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-13 00:39:21.651968 | orchestrator | Monday 13 April 2026 00:39:03 +0000 (0:00:01.006) 0:00:05.621 ********** 2026-04-13 00:39:21.651980 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:39:21.651992 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:39:21.652004 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:39:21.652017 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:39:21.652028 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:39:21.652041 | orchestrator | ok: [testbed-manager] 2026-04-13 00:39:21.652053 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:39:21.652065 | orchestrator | 2026-04-13 00:39:21.652078 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-13 00:39:21.652091 | orchestrator | Monday 13 April 2026 00:39:04 +0000 (0:00:01.329) 0:00:06.951 ********** 2026-04-13 00:39:21.652103 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:39:21.652115 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:39:21.652127 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:39:21.652139 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:39:21.652151 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:39:21.652162 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:21.652172 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:39:21.652183 | orchestrator | 2026-04-13 00:39:21.652201 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-13 00:39:21.652222 | orchestrator | Monday 13 April 2026 00:39:04 +0000 (0:00:00.623) 0:00:07.574 ********** 2026-04-13 00:39:21.652241 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:21.652260 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:39:21.652279 | orchestrator | changed: [testbed-node-1] 
2026-04-13 00:39:21.652296 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:39:21.652313 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:39:21.652330 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:39:21.652349 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:39:21.652367 | orchestrator | 2026-04-13 00:39:21.652386 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-13 00:39:21.652404 | orchestrator | Monday 13 April 2026 00:39:17 +0000 (0:00:12.665) 0:00:20.240 ********** 2026-04-13 00:39:21.652426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:39:21.652445 | orchestrator | 2026-04-13 00:39:21.652463 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-13 00:39:21.652483 | orchestrator | Monday 13 April 2026 00:39:18 +0000 (0:00:01.213) 0:00:21.454 ********** 2026-04-13 00:39:21.652494 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:39:21.652504 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:39:21.652515 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:39:21.652525 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:39:21.652536 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:39:21.652546 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:39:21.652556 | orchestrator | changed: [testbed-manager] 2026-04-13 00:39:21.652567 | orchestrator | 2026-04-13 00:39:21.652578 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:39:21.652589 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:39:21.652623 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:21.652635 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:21.652646 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:21.652668 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:21.652679 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:21.652689 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 00:39:21.652700 | orchestrator | 2026-04-13 00:39:21.652711 | orchestrator | 2026-04-13 00:39:21.652722 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:39:21.652756 | orchestrator | Monday 13 April 2026 00:39:21 +0000 (0:00:02.466) 0:00:23.921 ********** 2026-04-13 00:39:21.652767 | orchestrator | =============================================================================== 2026-04-13 00:39:21.652778 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.67s 2026-04-13 00:39:21.652789 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.47s 2026-04-13 00:39:21.652800 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.45s 2026-04-13 00:39:21.652810 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.33s 2026-04-13 00:39:21.652821 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s 2026-04-13 00:39:21.652832 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2026-04-13 00:39:21.652842 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 1.01s 2026-04-13 00:39:21.652853 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.63s 2026-04-13 00:39:21.652863 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.62s 2026-04-13 00:39:21.869131 | orchestrator | ++ semver 10.0.0 7.1.1 2026-04-13 00:39:21.937714 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-13 00:39:21.937823 | orchestrator | + sudo systemctl restart manager.service 2026-04-13 00:39:35.533541 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-13 00:39:35.533626 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-13 00:39:35.533641 | orchestrator | + local max_attempts=60 2026-04-13 00:39:35.533652 | orchestrator | + local name=ceph-ansible 2026-04-13 00:39:35.533662 | orchestrator | + local attempt_num=1 2026-04-13 00:39:35.533672 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:35.567796 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:35.567873 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:35.567887 | orchestrator | + sleep 5 2026-04-13 00:39:40.570407 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:40.613626 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:40.613713 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:40.613769 | orchestrator | + sleep 5 2026-04-13 00:39:45.616450 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:45.655037 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:45.655129 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:45.655145 | orchestrator | + sleep 5 2026-04-13 00:39:50.658801 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 
2026-04-13 00:39:50.684310 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:50.684396 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:50.684411 | orchestrator | + sleep 5 2026-04-13 00:39:55.685917 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:39:55.730829 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:39:55.730913 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:39:55.730926 | orchestrator | + sleep 5 2026-04-13 00:40:00.736108 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:00.776635 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:00.776776 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:00.776824 | orchestrator | + sleep 5 2026-04-13 00:40:05.780870 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:05.819422 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:05.819655 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:05.819674 | orchestrator | + sleep 5 2026-04-13 00:40:10.825404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:10.866322 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:10.866414 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:10.866436 | orchestrator | + sleep 5 2026-04-13 00:40:15.869369 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:15.905617 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:15.905740 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:15.905758 | orchestrator | + sleep 5 2026-04-13 00:40:20.911029 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 
00:40:20.951329 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:20.951424 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:20.951439 | orchestrator | + sleep 5 2026-04-13 00:40:25.957185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:25.997839 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:25.997929 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:25.997944 | orchestrator | + sleep 5 2026-04-13 00:40:31.003580 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:31.044743 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:31.044834 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:31.044849 | orchestrator | + sleep 5 2026-04-13 00:40:36.049214 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:36.086304 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:36.086397 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-13 00:40:36.086412 | orchestrator | + sleep 5 2026-04-13 00:40:41.091819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-13 00:40:41.137181 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:41.137294 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-13 00:40:41.137311 | orchestrator | + local max_attempts=60 2026-04-13 00:40:41.137323 | orchestrator | + local name=kolla-ansible 2026-04-13 00:40:41.137333 | orchestrator | + local attempt_num=1 2026-04-13 00:40:41.138207 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-13 00:40:41.172582 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:41.172702 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-13 00:40:41.172721 | 
orchestrator | + local max_attempts=60 2026-04-13 00:40:41.172733 | orchestrator | + local name=osism-ansible 2026-04-13 00:40:41.172745 | orchestrator | + local attempt_num=1 2026-04-13 00:40:41.172869 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-13 00:40:41.204743 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-13 00:40:41.204819 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-13 00:40:41.204840 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-13 00:40:41.364457 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-13 00:40:41.524451 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-13 00:40:41.695296 | orchestrator | ARA in osism-ansible already disabled. 2026-04-13 00:40:41.837395 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-13 00:40:41.838447 | orchestrator | + osism apply gather-facts 2026-04-13 00:40:53.270278 | orchestrator | 2026-04-13 00:40:53 | INFO  | Prepare task for execution of gather-facts. 2026-04-13 00:40:53.382591 | orchestrator | 2026-04-13 00:40:53 | INFO  | Task 4cc43f08-0dac-4746-9608-49c32d85e14f (gather-facts) was prepared for execution. 2026-04-13 00:40:53.382784 | orchestrator | 2026-04-13 00:40:53 | INFO  | It takes a moment until task 4cc43f08-0dac-4746-9608-49c32d85e14f (gather-facts) has been started and output is visible here. 
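The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` until the container reports `healthy`. A sketch of that helper reconstructed from the trace (the exact body is an assumption; the argument order, `attempt_num++ == max_attempts` check, and 5-second sleep are taken directly from the traced output, and plain `docker` stands in for the `/usr/bin/docker` path used in the trace):

```shell
# Wait until a container's health check reports "healthy".
# Reconstructed from the set -x trace above; details are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Query the Docker health status and retry until it is "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, ceph-ansible moves through `unhealthy` → `starting` → `healthy` over roughly a minute after the manager.service restart, while kolla-ansible and osism-ansible are already healthy on the first probe.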
2026-04-13 00:41:03.054774 | orchestrator | 2026-04-13 00:41:03.054979 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-13 00:41:03.054998 | orchestrator | 2026-04-13 00:41:03.055011 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-13 00:41:03.055056 | orchestrator | Monday 13 April 2026 00:40:56 +0000 (0:00:00.260) 0:00:00.260 ********** 2026-04-13 00:41:03.055075 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:41:03.055095 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:41:03.055112 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:41:03.055130 | orchestrator | ok: [testbed-manager] 2026-04-13 00:41:03.055148 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:41:03.055161 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:41:03.055173 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:41:03.055184 | orchestrator | 2026-04-13 00:41:03.055195 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-13 00:41:03.055205 | orchestrator | 2026-04-13 00:41:03.055218 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-13 00:41:03.055235 | orchestrator | Monday 13 April 2026 00:41:02 +0000 (0:00:05.520) 0:00:05.780 ********** 2026-04-13 00:41:03.055262 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:41:03.055283 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:41:03.055301 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:41:03.055318 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:41:03.055336 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:41:03.055355 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:41:03.055372 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:41:03.055391 | orchestrator | 2026-04-13 00:41:03.055410 | orchestrator | PLAY RECAP 
*********************************************************************
2026-04-13 00:41:03.055428 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055446 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055465 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055485 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055503 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055522 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055541 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-13 00:41:03.055560 | orchestrator |
2026-04-13 00:41:03.055580 | orchestrator |
2026-04-13 00:41:03.055597 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:41:03.055615 | orchestrator | Monday 13 April 2026 00:41:02 +0000 (0:00:00.646) 0:00:06.426 **********
2026-04-13 00:41:03.055635 | orchestrator | ===============================================================================
2026-04-13 00:41:03.055684 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.52s
2026-04-13 00:41:03.055704 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s
2026-04-13 00:41:03.259836 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-13 00:41:03.274855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-13 00:41:03.292458 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-13 00:41:03.308563 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-13 00:41:03.323450 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-13 00:41:03.334377 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-13 00:41:03.353456 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-13 00:41:03.364900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-13 00:41:03.382210 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-13 00:41:03.399359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-13 00:41:03.417786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-13 00:41:03.439857 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-13 00:41:03.456841 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-13 00:41:03.475881 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-13 00:41:03.489325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-13 00:41:03.506705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-13 00:41:03.521100 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-13 00:41:03.540912 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-13 00:41:03.555790 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-13 00:41:03.567648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-13 00:41:03.586332 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-13 00:41:03.602276 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-13 00:41:03.616796 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-13 00:41:03.630570 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-13 00:41:03.968426 | orchestrator | ok: Runtime: 0:24:16.995723
2026-04-13 00:41:04.077686 |
2026-04-13 00:41:04.077817 | TASK [Deploy services]
2026-04-13 00:41:04.631582 | orchestrator | skipping: Conditional result was False
2026-04-13 00:41:04.649150 |
2026-04-13 00:41:04.649310 | TASK [Deploy in a nutshell]
2026-04-13 00:41:05.363613 | orchestrator | + set -e
2026-04-13 00:41:05.363721 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-13 00:41:05.363733 | orchestrator | ++ export INTERACTIVE=false
2026-04-13 00:41:05.363741 | orchestrator | ++ INTERACTIVE=false
2026-04-13 00:41:05.363746 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-13 00:41:05.363750 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-13 00:41:05.363755 | orchestrator | + source /opt/manager-vars.sh
2026-04-13 00:41:05.363781 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-13 00:41:05.363792 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-13 00:41:05.363797 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-13 00:41:05.363803 | orchestrator | ++ CEPH_VERSION=reef
2026-04-13 00:41:05.363808 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-13 00:41:05.363814 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-13 00:41:05.363818 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-13 00:41:05.363825 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-13 00:41:05.363829 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-13 00:41:05.363835 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-13 00:41:05.363839 | orchestrator | ++ export ARA=false
2026-04-13 00:41:05.363843 | orchestrator | ++ ARA=false
2026-04-13 00:41:05.363847 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-13 00:41:05.363851 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-13 00:41:05.363855 | orchestrator | ++ export TEMPEST=true
2026-04-13 00:41:05.363859 | orchestrator | ++ TEMPEST=true
2026-04-13 00:41:05.363862 | orchestrator | ++ export IS_ZUUL=true
2026-04-13 00:41:05.363866 | orchestrator | ++ IS_ZUUL=true
2026-04-13 00:41:05.363870 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-04-13 00:41:05.363876 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-04-13 00:41:05.363879 | orchestrator | ++ export EXTERNAL_API=false
2026-04-13 00:41:05.363883 | orchestrator | ++ EXTERNAL_API=false
2026-04-13 00:41:05.363887 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-13 00:41:05.363891 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-13 00:41:05.363895 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-13 00:41:05.363898 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-13 00:41:05.363902 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-13 00:41:05.363906 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-13 00:41:05.363977 | orchestrator |
2026-04-13 00:41:05.363983 | orchestrator | # PULL IMAGES
2026-04-13 00:41:05.363987 | orchestrator |
2026-04-13 00:41:05.363993 | orchestrator | + echo
2026-04-13 00:41:05.363997 | orchestrator | + echo '# PULL IMAGES'
2026-04-13 00:41:05.364001 | orchestrator | + echo
2026-04-13 00:41:05.365369 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-13 00:41:05.418005 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-13 00:41:05.418126 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-13 00:41:06.650759 | orchestrator | 2026-04-13 00:41:06 | INFO  | Trying to run play pull-images in environment custom
2026-04-13 00:41:16.694583 | orchestrator | 2026-04-13 00:41:16 | INFO  | Prepare task for execution of pull-images.
2026-04-13 00:41:16.771126 | orchestrator | 2026-04-13 00:41:16 | INFO  | Task 684ef751-63aa-467a-8433-4e4e366e57b4 (pull-images) was prepared for execution.
2026-04-13 00:41:16.771223 | orchestrator | 2026-04-13 00:41:16 | INFO  | Task 684ef751-63aa-467a-8433-4e4e366e57b4 is running in background. No more output. Check ARA for logs.
2026-04-13 00:41:18.389284 | orchestrator | 2026-04-13 00:41:18 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-13 00:41:28.426387 | orchestrator | 2026-04-13 00:41:28 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-13 00:41:28.496220 | orchestrator | 2026-04-13 00:41:28 | INFO  | Task 217dbedf-b047-4d5a-a57d-0659f1d8b420 (wipe-partitions) was prepared for execution.
2026-04-13 00:41:28.496308 | orchestrator | 2026-04-13 00:41:28 | INFO  | It takes a moment until task 217dbedf-b047-4d5a-a57d-0659f1d8b420 (wipe-partitions) has been started and output is visible here.
2026-04-13 00:41:41.386719 | orchestrator |
2026-04-13 00:41:41.386815 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-13 00:41:41.386826 | orchestrator |
2026-04-13 00:41:41.386835 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-13 00:41:41.386850 | orchestrator | Monday 13 April 2026 00:41:31 +0000 (0:00:00.166) 0:00:00.166 **********
2026-04-13 00:41:41.386861 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:41:41.386894 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:41:41.386903 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:41:41.386911 | orchestrator |
2026-04-13 00:41:41.386919 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-13 00:41:41.386927 | orchestrator | Monday 13 April 2026 00:41:32 +0000 (0:00:00.972) 0:00:01.139 **********
2026-04-13 00:41:41.386935 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:41:41.386947 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:41:41.386955 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:41:41.386963 | orchestrator |
2026-04-13 00:41:41.386971 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-13 00:41:41.386979 | orchestrator | Monday 13 April 2026 00:41:32 +0000 (0:00:00.261) 0:00:01.400 **********
2026-04-13 00:41:41.386987 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:41:41.386996 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:41:41.387004 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:41:41.387011 | orchestrator |
2026-04-13 00:41:41.387020 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-13 00:41:41.387028 | orchestrator | Monday 13 April 2026 00:41:33 +0000 (0:00:00.584) 0:00:01.985 **********
2026-04-13 00:41:41.387035 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:41:41.387043 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:41:41.387051 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:41:41.387058 | orchestrator |
2026-04-13 00:41:41.387066 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-13 00:41:41.387074 | orchestrator | Monday 13 April 2026 00:41:33 +0000 (0:00:00.296) 0:00:02.281 **********
2026-04-13 00:41:41.387082 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-13 00:41:41.387093 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-13 00:41:41.387101 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-13 00:41:41.387109 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-13 00:41:41.387117 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-13 00:41:41.387125 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-13 00:41:41.387132 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-13 00:41:41.387140 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-13 00:41:41.387148 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-13 00:41:41.387156 | orchestrator |
2026-04-13 00:41:41.387165 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-13 00:41:41.387173 | orchestrator | Monday 13 April 2026 00:41:35 +0000 (0:00:01.348) 0:00:03.630 **********
2026-04-13 00:41:41.387193 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-13 00:41:41.387211 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-13 00:41:41.387219 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-13 00:41:41.387226 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-13 00:41:41.387234 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-13 00:41:41.387242 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-13 00:41:41.387249 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-13 00:41:41.387257 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-13 00:41:41.387270 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-13 00:41:41.387278 | orchestrator |
2026-04-13 00:41:41.387286 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-13 00:41:41.387294 | orchestrator | Monday 13 April 2026 00:41:36 +0000 (0:00:01.332) 0:00:04.963 **********
2026-04-13 00:41:41.387302 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-13 00:41:41.387309 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-13 00:41:41.387317 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-13 00:41:41.387325 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-13 00:41:41.387333 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-13 00:41:41.387391 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-13 00:41:41.387399 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-13 00:41:41.387407 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-13 00:41:41.387415 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-13 00:41:41.387423 | orchestrator |
2026-04-13 00:41:41.387431 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-13 00:41:41.387439 | orchestrator | Monday 13 April 2026 00:41:39 +0000 (0:00:03.227) 0:00:08.190 **********
2026-04-13 00:41:41.387446 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:41:41.387454 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:41:41.387462 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:41:41.387470 | orchestrator |
2026-04-13 00:41:41.387477 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-13 00:41:41.387485 | orchestrator | Monday 13 April 2026 00:41:40 +0000 (0:00:00.701) 0:00:08.892 **********
2026-04-13 00:41:41.387493 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:41:41.387501 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:41:41.387509 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:41:41.387516 | orchestrator |
2026-04-13 00:41:41.387525 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:41:41.387534 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:41:41.387543 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:41:41.387566 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:41:41.387574 | orchestrator |
2026-04-13 00:41:41.387582 | orchestrator |
2026-04-13 00:41:41.387590 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:41:41.387597 | orchestrator | Monday 13 April 2026 00:41:41 +0000 (0:00:00.801) 0:00:09.693 **********
2026-04-13 00:41:41.387605 | orchestrator | ===============================================================================
2026-04-13 00:41:41.387613 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.23s
2026-04-13 00:41:41.387640 | orchestrator | Check device availability ----------------------------------------------- 1.35s
2026-04-13 00:41:41.387648 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s
2026-04-13 00:41:41.387656 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.97s
2026-04-13 00:41:41.387664 | orchestrator | Request device events from the kernel ----------------------------------- 0.80s
2026-04-13 00:41:41.387672 | orchestrator | Reload udev rules ------------------------------------------------------- 0.70s
2026-04-13 00:41:41.387680 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s
2026-04-13 00:41:41.387687 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s
2026-04-13 00:41:41.387695 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2026-04-13 00:41:53.022744 | orchestrator | 2026-04-13 00:41:53 | INFO  | Prepare task for execution of facts.
2026-04-13 00:41:53.093073 | orchestrator | 2026-04-13 00:41:53 | INFO  | Task ac0badb6-058e-4e72-96af-079d89a6ec45 (facts) was prepared for execution.
2026-04-13 00:41:53.093178 | orchestrator | 2026-04-13 00:41:53 | INFO  | It takes a moment until task ac0badb6-058e-4e72-96af-079d89a6ec45 (facts) has been started and output is visible here.
2026-04-13 00:42:05.775497 | orchestrator |
2026-04-13 00:42:05.775706 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-13 00:42:05.775737 | orchestrator |
2026-04-13 00:42:05.775758 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-13 00:42:05.775810 | orchestrator | Monday 13 April 2026 00:41:56 +0000 (0:00:00.389) 0:00:00.389 **********
2026-04-13 00:42:05.775823 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:42:05.775836 | orchestrator | ok: [testbed-manager]
2026-04-13 00:42:05.775846 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:42:05.775857 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:42:05.775867 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:05.775878 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:42:05.775888 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:05.775899 | orchestrator |
2026-04-13 00:42:05.775909 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-13 00:42:05.775920 | orchestrator | Monday 13 April 2026 00:41:58 +0000 (0:00:01.428) 0:00:01.817 **********
2026-04-13 00:42:05.775931 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:42:05.775942 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:42:05.775953 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:42:05.775983 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:42:05.775994 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:05.776005 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:05.776018 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:05.776030 | orchestrator |
2026-04-13 00:42:05.776043 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-13 00:42:05.776056 | orchestrator |
2026-04-13 00:42:05.776068 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:42:05.776081 | orchestrator | Monday 13 April 2026 00:41:59 +0000 (0:00:01.283) 0:00:03.101 **********
2026-04-13 00:42:05.776095 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:42:05.776109 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:42:05.776121 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:42:05.776134 | orchestrator | ok: [testbed-manager]
2026-04-13 00:42:05.776146 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:05.776159 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:05.776171 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:42:05.776183 | orchestrator |
2026-04-13 00:42:05.776196 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-13 00:42:05.776208 | orchestrator |
2026-04-13 00:42:05.776221 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-13 00:42:05.776233 | orchestrator | Monday 13 April 2026 00:42:04 +0000 (0:00:05.574) 0:00:08.676 **********
2026-04-13 00:42:05.776245 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:42:05.776257 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:42:05.776269 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:42:05.776282 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:42:05.776294 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:05.776307 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:05.776321 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:05.776333 | orchestrator |
2026-04-13 00:42:05.776346 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:42:05.776360 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776374 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776385 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776396 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776406 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776417 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776436 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:42:05.776446 | orchestrator |
2026-04-13 00:42:05.776457 | orchestrator |
2026-04-13 00:42:05.776468 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:42:05.776479 | orchestrator | Monday 13 April 2026 00:42:05 +0000 (0:00:00.511) 0:00:09.187 **********
2026-04-13 00:42:05.776490 | orchestrator | ===============================================================================
2026-04-13 00:42:05.776500 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.57s
2026-04-13 00:42:05.776511 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.43s
2026-04-13 00:42:05.776521 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s
2026-04-13 00:42:05.776532 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2026-04-13 00:42:07.405266 | orchestrator | 2026-04-13 00:42:07 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-13 00:42:07.488295 | orchestrator | 2026-04-13 00:42:07 | INFO  | Task 99340b8c-6c60-498b-8628-7ab6a670cad1 (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-13 00:42:07.488387 | orchestrator | 2026-04-13 00:42:07 | INFO  | It takes a moment until task 99340b8c-6c60-498b-8628-7ab6a670cad1 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-13 00:42:19.713025 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-13 00:42:19.713141 | orchestrator | 2.16.14
2026-04-13 00:42:19.713160 | orchestrator |
2026-04-13 00:42:19.713172 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-13 00:42:19.713184 | orchestrator |
2026-04-13 00:42:19.713196 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:42:19.713207 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.314) 0:00:00.314 **********
2026-04-13 00:42:19.713218 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:19.713240 | orchestrator |
2026-04-13 00:42:19.713252 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:42:19.713263 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.238) 0:00:00.553 **********
2026-04-13 00:42:19.713274 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:42:19.713286 | orchestrator |
2026-04-13 00:42:19.713297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713308 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.214) 0:00:00.767 **********
2026-04-13 00:42:19.713319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-13 00:42:19.713330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-13 00:42:19.713340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-13 00:42:19.713351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-13 00:42:19.713362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-13 00:42:19.713373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-13 00:42:19.713384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-13 00:42:19.713394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-13 00:42:19.713405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-13 00:42:19.713416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-13 00:42:19.713450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-13 00:42:19.713462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-13 00:42:19.713473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-13 00:42:19.713484 | orchestrator |
2026-04-13 00:42:19.713495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713506 | orchestrator | Monday 13 April 2026 00:42:12 +0000 (0:00:00.358) 0:00:01.126 **********
2026-04-13 00:42:19.713516 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713527 | orchestrator |
2026-04-13 00:42:19.713538 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713549 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.503) 0:00:01.629 **********
2026-04-13 00:42:19.713560 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713570 | orchestrator |
2026-04-13 00:42:19.713581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713632 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.205) 0:00:01.835 **********
2026-04-13 00:42:19.713651 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713662 | orchestrator |
2026-04-13 00:42:19.713673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713684 | orchestrator | Monday 13 April 2026 00:42:13 +0000 (0:00:00.190) 0:00:02.026 **********
2026-04-13 00:42:19.713695 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713706 | orchestrator |
2026-04-13 00:42:19.713717 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713728 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.215) 0:00:02.242 **********
2026-04-13 00:42:19.713739 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713750 | orchestrator |
2026-04-13 00:42:19.713760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713771 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.207) 0:00:02.450 **********
2026-04-13 00:42:19.713782 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713792 | orchestrator |
2026-04-13 00:42:19.713803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713814 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.218) 0:00:02.669 **********
2026-04-13 00:42:19.713838 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713849 | orchestrator |
2026-04-13 00:42:19.713871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713882 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.201) 0:00:02.870 **********
2026-04-13 00:42:19.713892 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.713903 | orchestrator |
2026-04-13 00:42:19.713914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.713931 | orchestrator | Monday 13 April 2026 00:42:14 +0000 (0:00:00.198) 0:00:03.069 **********
2026-04-13 00:42:19.713946 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5)
2026-04-13 00:42:19.713967 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5)
2026-04-13 00:42:19.713996 | orchestrator |
2026-04-13 00:42:19.714194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.714273 | orchestrator | Monday 13 April 2026 00:42:15 +0000 (0:00:00.402) 0:00:03.472 **********
2026-04-13 00:42:19.714309 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089)
2026-04-13 00:42:19.714328 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089)
2026-04-13 00:42:19.714346 | orchestrator |
2026-04-13 00:42:19.714365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.714382 | orchestrator | Monday 13 April 2026 00:42:15 +0000 (0:00:00.419) 0:00:03.892 **********
2026-04-13 00:42:19.714421 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1)
2026-04-13 00:42:19.714439 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1)
2026-04-13 00:42:19.714456 | orchestrator |
2026-04-13 00:42:19.714474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.714492 | orchestrator | Monday 13 April 2026 00:42:16 +0000 (0:00:00.679) 0:00:04.572 **********
2026-04-13 00:42:19.714511 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb)
2026-04-13 00:42:19.714527 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb)
2026-04-13 00:42:19.714544 | orchestrator |
2026-04-13 00:42:19.714560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:19.714578 | orchestrator | Monday 13 April 2026 00:42:17 +0000 (0:00:00.695) 0:00:05.267 **********
2026-04-13 00:42:19.714624 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:42:19.714644 | orchestrator |
2026-04-13 00:42:19.714662 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.714680 | orchestrator | Monday 13 April 2026 00:42:17 +0000 (0:00:00.801) 0:00:06.068 **********
2026-04-13 00:42:19.714699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-13 00:42:19.714716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-13 00:42:19.714737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-13 00:42:19.714755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-13 00:42:19.714774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-13 00:42:19.714793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-13 00:42:19.714812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-13 00:42:19.714831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-13 00:42:19.714849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-13 00:42:19.714869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-13 00:42:19.714889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-13 00:42:19.714908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-13 00:42:19.714926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-13 00:42:19.714944 | orchestrator |
2026-04-13 00:42:19.714962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.714981 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.384) 0:00:06.453 **********
2026-04-13 00:42:19.714999 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715011 | orchestrator |
2026-04-13 00:42:19.715022 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.715033 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.206) 0:00:06.659 **********
2026-04-13 00:42:19.715044 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715055 | orchestrator |
2026-04-13 00:42:19.715065 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.715076 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.189) 0:00:06.848 **********
2026-04-13 00:42:19.715087 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715097 | orchestrator |
2026-04-13 00:42:19.715108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.715129 | orchestrator | Monday 13 April 2026 00:42:18 +0000 (0:00:00.213) 0:00:07.062 **********
2026-04-13 00:42:19.715139 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715150 | orchestrator |
2026-04-13 00:42:19.715161 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.715172 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.193) 0:00:07.256 **********
2026-04-13 00:42:19.715190 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715208 | orchestrator |
2026-04-13 00:42:19.715225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.715243 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.195) 0:00:07.452 **********
2026-04-13 00:42:19.715260 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715276 | orchestrator |
2026-04-13 00:42:19.715307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:19.715325 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.194) 0:00:07.646 **********
2026-04-13 00:42:19.715344 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:19.715363 | orchestrator |
2026-04-13 00:42:19.715400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.624857 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.196) 0:00:07.842 **********
2026-04-13 00:42:27.624966 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.624983 | orchestrator |
2026-04-13 00:42:27.624996 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.625007 | orchestrator | Monday 13 April 2026 00:42:19 +0000 (0:00:00.204) 0:00:08.047 **********
2026-04-13 00:42:27.625018 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-13 00:42:27.625029 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-13 00:42:27.625041 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-13 00:42:27.625051 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-13 00:42:27.625062 | orchestrator |
2026-04-13 00:42:27.625073 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.625084 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:01.101) 0:00:09.149 **********
2026-04-13 00:42:27.625095 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.625105 | orchestrator |
2026-04-13 00:42:27.625116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.625127 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.198) 0:00:09.348 **********
2026-04-13 00:42:27.625137 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.625148 | orchestrator |
2026-04-13 00:42:27.625159 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.625169 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.205) 0:00:09.553 **********
2026-04-13 00:42:27.625180 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.625190 | orchestrator |
2026-04-13 00:42:27.625201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:27.625212 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.213) 0:00:09.767 **********
2026-04-13 00:42:27.625223 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.625233 | orchestrator |
2026-04-13 00:42:27.625244 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-13 00:42:27.625255 | orchestrator | Monday 13 April 2026 00:42:21 +0000 (0:00:00.212) 0:00:09.980 **********
2026-04-13 00:42:27.625266 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-13 00:42:27.625276 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-13 00:42:27.625287 | orchestrator |
2026-04-13 00:42:27.625298 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2026-04-13 00:42:27.625309 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.184) 0:00:10.165 ********** 2026-04-13 00:42:27.625320 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.625355 | orchestrator | 2026-04-13 00:42:27.625368 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-13 00:42:27.625379 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.132) 0:00:10.297 ********** 2026-04-13 00:42:27.625389 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.625400 | orchestrator | 2026-04-13 00:42:27.625412 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-13 00:42:27.625424 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.126) 0:00:10.423 ********** 2026-04-13 00:42:27.625436 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.625448 | orchestrator | 2026-04-13 00:42:27.625460 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-13 00:42:27.625475 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.146) 0:00:10.570 ********** 2026-04-13 00:42:27.625487 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:42:27.625500 | orchestrator | 2026-04-13 00:42:27.625513 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-13 00:42:27.625525 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.132) 0:00:10.702 ********** 2026-04-13 00:42:27.625537 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'}}) 2026-04-13 00:42:27.625550 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '100799fe-f0b8-5d68-80c9-d39d0aace7f9'}}) 2026-04-13 00:42:27.625562 | orchestrator | 2026-04-13 00:42:27.625575 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-13 00:42:27.625662 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.178) 0:00:10.880 ********** 2026-04-13 00:42:27.625678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'}})  2026-04-13 00:42:27.625705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '100799fe-f0b8-5d68-80c9-d39d0aace7f9'}})  2026-04-13 00:42:27.625718 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.625730 | orchestrator | 2026-04-13 00:42:27.625742 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-13 00:42:27.625755 | orchestrator | Monday 13 April 2026 00:42:22 +0000 (0:00:00.148) 0:00:11.028 ********** 2026-04-13 00:42:27.625767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'}})  2026-04-13 00:42:27.625777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '100799fe-f0b8-5d68-80c9-d39d0aace7f9'}})  2026-04-13 00:42:27.625788 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.625798 | orchestrator | 2026-04-13 00:42:27.625809 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-13 00:42:27.625820 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.174) 0:00:11.203 ********** 2026-04-13 00:42:27.625830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'}})  2026-04-13 00:42:27.625860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '100799fe-f0b8-5d68-80c9-d39d0aace7f9'}})  2026-04-13 00:42:27.625872 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.625883 | 
orchestrator | 2026-04-13 00:42:27.625894 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-13 00:42:27.625905 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.373) 0:00:11.577 ********** 2026-04-13 00:42:27.625915 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:42:27.625926 | orchestrator | 2026-04-13 00:42:27.625937 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-13 00:42:27.625947 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.135) 0:00:11.712 ********** 2026-04-13 00:42:27.625958 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:42:27.625968 | orchestrator | 2026-04-13 00:42:27.625990 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-13 00:42:27.626001 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.130) 0:00:11.843 ********** 2026-04-13 00:42:27.626012 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.626122 | orchestrator | 2026-04-13 00:42:27.626135 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-13 00:42:27.626146 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.126) 0:00:11.970 ********** 2026-04-13 00:42:27.626157 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.626167 | orchestrator | 2026-04-13 00:42:27.626178 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-13 00:42:27.626189 | orchestrator | Monday 13 April 2026 00:42:23 +0000 (0:00:00.128) 0:00:12.098 ********** 2026-04-13 00:42:27.626209 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:42:27.626221 | orchestrator | 2026-04-13 00:42:27.626231 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-13 00:42:27.626242 | orchestrator | Monday 13 April 2026 00:42:24 +0000 
(0:00:00.135) 0:00:12.234 **********
2026-04-13 00:42:27.626253 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:42:27.626264 | orchestrator |     "ceph_osd_devices": {
2026-04-13 00:42:27.626275 | orchestrator |         "sdb": {
2026-04-13 00:42:27.626286 | orchestrator |             "osd_lvm_uuid": "9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"
2026-04-13 00:42:27.626297 | orchestrator |         },
2026-04-13 00:42:27.626308 | orchestrator |         "sdc": {
2026-04-13 00:42:27.626318 | orchestrator |             "osd_lvm_uuid": "100799fe-f0b8-5d68-80c9-d39d0aace7f9"
2026-04-13 00:42:27.626329 | orchestrator |         }
2026-04-13 00:42:27.626340 | orchestrator |     }
2026-04-13 00:42:27.626351 | orchestrator | }
2026-04-13 00:42:27.626362 | orchestrator |
2026-04-13 00:42:27.626372 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-13 00:42:27.626383 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.136) 0:00:12.370 **********
2026-04-13 00:42:27.626394 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.626405 | orchestrator |
2026-04-13 00:42:27.626415 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-13 00:42:27.626426 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.121) 0:00:12.492 **********
2026-04-13 00:42:27.626437 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.626447 | orchestrator |
2026-04-13 00:42:27.626458 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-13 00:42:27.626469 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.126) 0:00:12.618 **********
2026-04-13 00:42:27.626479 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:42:27.626490 | orchestrator |
2026-04-13 00:42:27.626501 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-13 00:42:27.626511 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.153) 0:00:12.772 **********
2026-04-13 00:42:27.626522 | orchestrator | changed: [testbed-node-3] => {
2026-04-13 00:42:27.626533 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-13 00:42:27.626544 | orchestrator |         "ceph_osd_devices": {
2026-04-13 00:42:27.626555 | orchestrator |             "sdb": {
2026-04-13 00:42:27.626566 | orchestrator |                 "osd_lvm_uuid": "9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"
2026-04-13 00:42:27.626577 | orchestrator |             },
2026-04-13 00:42:27.626607 | orchestrator |             "sdc": {
2026-04-13 00:42:27.626619 | orchestrator |                 "osd_lvm_uuid": "100799fe-f0b8-5d68-80c9-d39d0aace7f9"
2026-04-13 00:42:27.626630 | orchestrator |             }
2026-04-13 00:42:27.626641 | orchestrator |         },
2026-04-13 00:42:27.626651 | orchestrator |         "lvm_volumes": [
2026-04-13 00:42:27.626662 | orchestrator |             {
2026-04-13 00:42:27.626673 | orchestrator |                 "data": "osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a",
2026-04-13 00:42:27.626684 | orchestrator |                 "data_vg": "ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"
2026-04-13 00:42:27.626703 | orchestrator |             },
2026-04-13 00:42:27.626714 | orchestrator |             {
2026-04-13 00:42:27.626724 | orchestrator |                 "data": "osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9",
2026-04-13 00:42:27.626735 | orchestrator |                 "data_vg": "ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9"
2026-04-13 00:42:27.626746 | orchestrator |             }
2026-04-13 00:42:27.626757 | orchestrator |         ]
2026-04-13 00:42:27.626768 | orchestrator |     }
2026-04-13 00:42:27.626778 | orchestrator | }
2026-04-13 00:42:27.626789 | orchestrator |
2026-04-13 00:42:27.626800 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-13 00:42:27.626811 | orchestrator | Monday 13 April 2026 00:42:24 +0000 (0:00:00.219) 0:00:12.991 **********
2026-04-13 00:42:27.626821 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:27.626832 | orchestrator |
2026-04-13 00:42:27.626843 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-04-13 00:42:27.626853 | orchestrator | 2026-04-13 00:42:27.626864 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-13 00:42:27.626875 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:02.277) 0:00:15.268 ********** 2026-04-13 00:42:27.626885 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-13 00:42:27.626896 | orchestrator | 2026-04-13 00:42:27.626907 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-13 00:42:27.626917 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:00.246) 0:00:15.515 ********** 2026-04-13 00:42:27.626928 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:42:27.626939 | orchestrator | 2026-04-13 00:42:27.626959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691332 | orchestrator | Monday 13 April 2026 00:42:27 +0000 (0:00:00.238) 0:00:15.753 ********** 2026-04-13 00:42:35.691477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-13 00:42:35.691499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-13 00:42:35.691513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-13 00:42:35.691527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-13 00:42:35.691542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-13 00:42:35.691556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-13 00:42:35.691569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-13 00:42:35.691642 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-13 00:42:35.691663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-13 00:42:35.691677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-13 00:42:35.691691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-13 00:42:35.691700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-13 00:42:35.691708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-13 00:42:35.691716 | orchestrator | 2026-04-13 00:42:35.691724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691732 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.386) 0:00:16.140 ********** 2026-04-13 00:42:35.691740 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691750 | orchestrator | 2026-04-13 00:42:35.691758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691766 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.180) 0:00:16.320 ********** 2026-04-13 00:42:35.691773 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691802 | orchestrator | 2026-04-13 00:42:35.691811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691818 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.205) 0:00:16.526 ********** 2026-04-13 00:42:35.691826 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691834 | orchestrator | 2026-04-13 00:42:35.691842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691849 | 
orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.191) 0:00:16.718 ********** 2026-04-13 00:42:35.691858 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691867 | orchestrator | 2026-04-13 00:42:35.691876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691884 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.184) 0:00:16.903 ********** 2026-04-13 00:42:35.691893 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691902 | orchestrator | 2026-04-13 00:42:35.691912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691920 | orchestrator | Monday 13 April 2026 00:42:28 +0000 (0:00:00.211) 0:00:17.114 ********** 2026-04-13 00:42:35.691929 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691938 | orchestrator | 2026-04-13 00:42:35.691947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691956 | orchestrator | Monday 13 April 2026 00:42:29 +0000 (0:00:00.629) 0:00:17.743 ********** 2026-04-13 00:42:35.691964 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.691973 | orchestrator | 2026-04-13 00:42:35.691982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.691991 | orchestrator | Monday 13 April 2026 00:42:29 +0000 (0:00:00.220) 0:00:17.963 ********** 2026-04-13 00:42:35.692000 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692009 | orchestrator | 2026-04-13 00:42:35.692018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.692027 | orchestrator | Monday 13 April 2026 00:42:30 +0000 (0:00:00.200) 0:00:18.164 ********** 2026-04-13 00:42:35.692036 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191) 2026-04-13 00:42:35.692045 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191) 2026-04-13 00:42:35.692053 | orchestrator | 2026-04-13 00:42:35.692061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.692069 | orchestrator | Monday 13 April 2026 00:42:30 +0000 (0:00:00.446) 0:00:18.610 ********** 2026-04-13 00:42:35.692077 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7) 2026-04-13 00:42:35.692085 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7) 2026-04-13 00:42:35.692092 | orchestrator | 2026-04-13 00:42:35.692100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.692114 | orchestrator | Monday 13 April 2026 00:42:30 +0000 (0:00:00.436) 0:00:19.046 ********** 2026-04-13 00:42:35.692122 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9) 2026-04-13 00:42:35.692130 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9) 2026-04-13 00:42:35.692138 | orchestrator | 2026-04-13 00:42:35.692145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:42:35.692170 | orchestrator | Monday 13 April 2026 00:42:31 +0000 (0:00:00.421) 0:00:19.467 ********** 2026-04-13 00:42:35.692178 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69) 2026-04-13 00:42:35.692187 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69) 2026-04-13 00:42:35.692194 | orchestrator | 2026-04-13 00:42:35.692202 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-13 00:42:35.692218 | orchestrator | Monday 13 April 2026 00:42:31 +0000 (0:00:00.439) 0:00:19.906 ********** 2026-04-13 00:42:35.692225 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-13 00:42:35.692233 | orchestrator | 2026-04-13 00:42:35.692241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692248 | orchestrator | Monday 13 April 2026 00:42:32 +0000 (0:00:00.329) 0:00:20.235 ********** 2026-04-13 00:42:35.692256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-13 00:42:35.692264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-13 00:42:35.692272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-13 00:42:35.692279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-13 00:42:35.692287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-13 00:42:35.692295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-13 00:42:35.692302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-13 00:42:35.692310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-13 00:42:35.692324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-13 00:42:35.692336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-13 00:42:35.692351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-13 00:42:35.692364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-13 00:42:35.692376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-13 00:42:35.692389 | orchestrator | 2026-04-13 00:42:35.692400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692413 | orchestrator | Monday 13 April 2026 00:42:32 +0000 (0:00:00.390) 0:00:20.626 ********** 2026-04-13 00:42:35.692427 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692440 | orchestrator | 2026-04-13 00:42:35.692454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692467 | orchestrator | Monday 13 April 2026 00:42:32 +0000 (0:00:00.202) 0:00:20.828 ********** 2026-04-13 00:42:35.692480 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692489 | orchestrator | 2026-04-13 00:42:35.692497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692504 | orchestrator | Monday 13 April 2026 00:42:33 +0000 (0:00:00.688) 0:00:21.517 ********** 2026-04-13 00:42:35.692512 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692520 | orchestrator | 2026-04-13 00:42:35.692528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692535 | orchestrator | Monday 13 April 2026 00:42:33 +0000 (0:00:00.206) 0:00:21.724 ********** 2026-04-13 00:42:35.692543 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692551 | orchestrator | 2026-04-13 00:42:35.692558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692566 | orchestrator | Monday 13 April 2026 00:42:33 +0000 (0:00:00.209) 0:00:21.934 ********** 2026-04-13 00:42:35.692573 
| orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692633 | orchestrator | 2026-04-13 00:42:35.692643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692650 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.211) 0:00:22.145 ********** 2026-04-13 00:42:35.692658 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692666 | orchestrator | 2026-04-13 00:42:35.692673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692689 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.216) 0:00:22.362 ********** 2026-04-13 00:42:35.692697 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692705 | orchestrator | 2026-04-13 00:42:35.692713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692720 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.253) 0:00:22.615 ********** 2026-04-13 00:42:35.692728 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:35.692736 | orchestrator | 2026-04-13 00:42:35.692744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692751 | orchestrator | Monday 13 April 2026 00:42:34 +0000 (0:00:00.264) 0:00:22.880 ********** 2026-04-13 00:42:35.692765 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-13 00:42:35.692774 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-13 00:42:35.692782 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-13 00:42:35.692790 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-13 00:42:35.692798 | orchestrator | 2026-04-13 00:42:35.692805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:35.692813 | orchestrator | Monday 13 April 2026 00:42:35 +0000 (0:00:00.815) 0:00:23.695 
********** 2026-04-13 00:42:35.692821 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:42.616201 | orchestrator | 2026-04-13 00:42:42.616320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:42.616338 | orchestrator | Monday 13 April 2026 00:42:35 +0000 (0:00:00.222) 0:00:23.918 ********** 2026-04-13 00:42:42.616350 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:42.616362 | orchestrator | 2026-04-13 00:42:42.616374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:42.616386 | orchestrator | Monday 13 April 2026 00:42:35 +0000 (0:00:00.204) 0:00:24.122 ********** 2026-04-13 00:42:42.616397 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:42.616407 | orchestrator | 2026-04-13 00:42:42.616418 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:42:42.616429 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.222) 0:00:24.345 ********** 2026-04-13 00:42:42.616440 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:42:42.616451 | orchestrator | 2026-04-13 00:42:42.616462 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-13 00:42:42.616473 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.214) 0:00:24.559 ********** 2026-04-13 00:42:42.616484 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-13 00:42:42.616494 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-13 00:42:42.616505 | orchestrator | 2026-04-13 00:42:42.616517 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-13 00:42:42.616528 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.414) 0:00:24.974 ********** 2026-04-13 00:42:42.616538 | orchestrator | skipping: 
[testbed-node-4]
2026-04-13 00:42:42.616549 | orchestrator |
2026-04-13 00:42:42.616560 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-13 00:42:42.616571 | orchestrator | Monday 13 April 2026 00:42:36 +0000 (0:00:00.157) 0:00:25.131 **********
2026-04-13 00:42:42.616648 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.616660 | orchestrator |
2026-04-13 00:42:42.616671 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-13 00:42:42.616683 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.157) 0:00:25.288 **********
2026-04-13 00:42:42.616693 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.616704 | orchestrator |
2026-04-13 00:42:42.616715 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-13 00:42:42.616728 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.166) 0:00:25.454 **********
2026-04-13 00:42:42.616740 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:42.616777 | orchestrator |
2026-04-13 00:42:42.616790 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-13 00:42:42.616802 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.153) 0:00:25.607 **********
2026-04-13 00:42:42.616815 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '586ba51f-dba7-5dcd-8710-1804179cab86'}})
2026-04-13 00:42:42.616827 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '971aa970-5a40-5da7-9620-8f2c789358d2'}})
2026-04-13 00:42:42.616840 | orchestrator |
2026-04-13 00:42:42.616852 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-13 00:42:42.616864 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.166) 0:00:25.774 **********
2026-04-13 00:42:42.616877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '586ba51f-dba7-5dcd-8710-1804179cab86'}})
2026-04-13 00:42:42.616891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '971aa970-5a40-5da7-9620-8f2c789358d2'}})
2026-04-13 00:42:42.616903 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.616915 | orchestrator |
2026-04-13 00:42:42.616928 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-13 00:42:42.616940 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.170) 0:00:25.945 **********
2026-04-13 00:42:42.616953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '586ba51f-dba7-5dcd-8710-1804179cab86'}})
2026-04-13 00:42:42.616965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '971aa970-5a40-5da7-9620-8f2c789358d2'}})
2026-04-13 00:42:42.616978 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.616991 | orchestrator |
2026-04-13 00:42:42.617003 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-13 00:42:42.617016 | orchestrator | Monday 13 April 2026 00:42:37 +0000 (0:00:00.157) 0:00:26.103 **********
2026-04-13 00:42:42.617028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '586ba51f-dba7-5dcd-8710-1804179cab86'}})
2026-04-13 00:42:42.617040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '971aa970-5a40-5da7-9620-8f2c789358d2'}})
2026-04-13 00:42:42.617053 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617065 | orchestrator |
2026-04-13 00:42:42.617077 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-13 00:42:42.617089 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.175) 0:00:26.278 **********
2026-04-13 00:42:42.617100 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:42.617111 | orchestrator |
2026-04-13 00:42:42.617122 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-13 00:42:42.617132 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.150) 0:00:26.429 **********
2026-04-13 00:42:42.617143 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:42:42.617154 | orchestrator |
2026-04-13 00:42:42.617165 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-13 00:42:42.617176 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.150) 0:00:26.580 **********
2026-04-13 00:42:42.617205 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617218 | orchestrator |
2026-04-13 00:42:42.617246 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-13 00:42:42.617257 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.156) 0:00:26.736 **********
2026-04-13 00:42:42.617268 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617279 | orchestrator |
2026-04-13 00:42:42.617290 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-13 00:42:42.617301 | orchestrator | Monday 13 April 2026 00:42:38 +0000 (0:00:00.391) 0:00:27.127 **********
2026-04-13 00:42:42.617312 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617322 | orchestrator |
2026-04-13 00:42:42.617340 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-13 00:42:42.617351 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.127) 0:00:27.255 **********
2026-04-13 00:42:42.617362 | orchestrator | ok: [testbed-node-4] => {
2026-04-13 00:42:42.617373 | orchestrator |     "ceph_osd_devices": {
2026-04-13 00:42:42.617384 | orchestrator |         "sdb": {
2026-04-13 00:42:42.617395 | orchestrator |             "osd_lvm_uuid": "586ba51f-dba7-5dcd-8710-1804179cab86"
2026-04-13 00:42:42.617406 | orchestrator |         },
2026-04-13 00:42:42.617417 | orchestrator |         "sdc": {
2026-04-13 00:42:42.617427 | orchestrator |             "osd_lvm_uuid": "971aa970-5a40-5da7-9620-8f2c789358d2"
2026-04-13 00:42:42.617438 | orchestrator |         }
2026-04-13 00:42:42.617449 | orchestrator |     }
2026-04-13 00:42:42.617460 | orchestrator | }
2026-04-13 00:42:42.617471 | orchestrator |
2026-04-13 00:42:42.617482 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-13 00:42:42.617493 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.146) 0:00:27.401 **********
2026-04-13 00:42:42.617504 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617515 | orchestrator |
2026-04-13 00:42:42.617526 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-13 00:42:42.617536 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.144) 0:00:27.546 **********
2026-04-13 00:42:42.617547 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617558 | orchestrator |
2026-04-13 00:42:42.617569 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-13 00:42:42.617600 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.139) 0:00:27.686 **********
2026-04-13 00:42:42.617611 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:42:42.617622 | orchestrator |
2026-04-13 00:42:42.617633 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-13 00:42:42.617644 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.129) 0:00:27.816 **********
2026-04-13 00:42:42.617654 | orchestrator | changed: [testbed-node-4] => {
2026-04-13 00:42:42.617665 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-13 00:42:42.617676 | orchestrator |         "ceph_osd_devices": {
2026-04-13 00:42:42.617687 | orchestrator |             "sdb": {
2026-04-13 00:42:42.617698 | orchestrator |                 "osd_lvm_uuid": "586ba51f-dba7-5dcd-8710-1804179cab86"
2026-04-13 00:42:42.617709 | orchestrator |             },
2026-04-13 00:42:42.617720 | orchestrator |             "sdc": {
2026-04-13 00:42:42.617731 | orchestrator |                 "osd_lvm_uuid": "971aa970-5a40-5da7-9620-8f2c789358d2"
2026-04-13 00:42:42.617742 | orchestrator |             }
2026-04-13 00:42:42.617753 | orchestrator |         },
2026-04-13 00:42:42.617763 | orchestrator |         "lvm_volumes": [
2026-04-13 00:42:42.617774 | orchestrator |             {
2026-04-13 00:42:42.617785 | orchestrator |                 "data": "osd-block-586ba51f-dba7-5dcd-8710-1804179cab86",
2026-04-13 00:42:42.617796 | orchestrator |                 "data_vg": "ceph-586ba51f-dba7-5dcd-8710-1804179cab86"
2026-04-13 00:42:42.617807 | orchestrator |             },
2026-04-13 00:42:42.617818 | orchestrator |             {
2026-04-13 00:42:42.617829 | orchestrator |                 "data": "osd-block-971aa970-5a40-5da7-9620-8f2c789358d2",
2026-04-13 00:42:42.617840 | orchestrator |                 "data_vg": "ceph-971aa970-5a40-5da7-9620-8f2c789358d2"
2026-04-13 00:42:42.617851 | orchestrator |             }
2026-04-13 00:42:42.617861 | orchestrator |         ]
2026-04-13 00:42:42.617872 | orchestrator |     }
2026-04-13 00:42:42.617883 | orchestrator | }
2026-04-13 00:42:42.617894 | orchestrator |
2026-04-13 00:42:42.617904 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-13 00:42:42.617915 | orchestrator | Monday 13 April 2026 00:42:39 +0000 (0:00:00.219) 0:00:28.035 **********
2026-04-13 00:42:42.617926 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:42.617937 | orchestrator |
2026-04-13 00:42:42.617948 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-13 00:42:42.617965 | orchestrator |
2026-04-13 00:42:42.617976 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:42:42.617987 | orchestrator | Monday 13 April 2026 00:42:41 +0000 (0:00:01.231) 0:00:29.267 **********
2026-04-13 00:42:42.617998 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:42.618009 | orchestrator |
2026-04-13 00:42:42.618115 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:42:42.618131 | orchestrator | Monday 13 April 2026 00:42:41 +0000 (0:00:00.483) 0:00:29.750 **********
2026-04-13 00:42:42.618142 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:42.618153 | orchestrator |
2026-04-13 00:42:42.618163 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:42.618174 | orchestrator | Monday 13 April 2026 00:42:42 +0000 (0:00:00.690) 0:00:30.441 **********
2026-04-13 00:42:42.618185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-13 00:42:42.618195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-13 00:42:42.618206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-13 00:42:42.618217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-13 00:42:42.618228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-13 00:42:42.618247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-13 00:42:51.159558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-13 00:42:51.159705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-13 00:42:51.159718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-13 00:42:51.159726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-13 00:42:51.159733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-13 00:42:51.159740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-13 00:42:51.159748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-13 00:42:51.159754 | orchestrator |
2026-04-13 00:42:51.159762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159771 | orchestrator | Monday 13 April 2026 00:42:42 +0000 (0:00:00.405) 0:00:30.846 **********
2026-04-13 00:42:51.159778 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.159785 | orchestrator |
2026-04-13 00:42:51.159792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159798 | orchestrator | Monday 13 April 2026 00:42:42 +0000 (0:00:00.199) 0:00:31.045 **********
2026-04-13 00:42:51.159804 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.159810 | orchestrator |
2026-04-13 00:42:51.159833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159840 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.203) 0:00:31.248 **********
2026-04-13 00:42:51.159846 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.159853 | orchestrator |
2026-04-13 00:42:51.159860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159867 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.210) 0:00:31.459 **********
2026-04-13 00:42:51.159873 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.159880 | orchestrator |
2026-04-13 00:42:51.159890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159897 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.198) 0:00:31.658 **********
2026-04-13 00:42:51.159904 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.159931 | orchestrator |
2026-04-13 00:42:51.159938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159960 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.192) 0:00:31.851 **********
2026-04-13 00:42:51.159967 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.159973 | orchestrator |
2026-04-13 00:42:51.159979 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.159995 | orchestrator | Monday 13 April 2026 00:42:43 +0000 (0:00:00.180) 0:00:32.031 **********
2026-04-13 00:42:51.160002 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160008 | orchestrator |
2026-04-13 00:42:51.160015 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.160022 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.184) 0:00:32.215 **********
2026-04-13 00:42:51.160029 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160036 | orchestrator |
2026-04-13 00:42:51.160042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.160049 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.199) 0:00:32.414 **********
2026-04-13 00:42:51.160056 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b)
2026-04-13 00:42:51.160065 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b)
2026-04-13 00:42:51.160071 | orchestrator |
2026-04-13 00:42:51.160078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.160084 | orchestrator | Monday 13 April 2026 00:42:44 +0000 (0:00:00.640) 0:00:33.055 **********
2026-04-13 00:42:51.160092 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5)
2026-04-13 00:42:51.160100 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5)
2026-04-13 00:42:51.160107 | orchestrator |
2026-04-13 00:42:51.160115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.160122 | orchestrator | Monday 13 April 2026 00:42:45 +0000 (0:00:00.892) 0:00:33.948 **********
2026-04-13 00:42:51.160129 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687)
2026-04-13 00:42:51.160136 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687)
2026-04-13 00:42:51.160144 | orchestrator |
2026-04-13 00:42:51.160151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.160159 | orchestrator | Monday 13 April 2026 00:42:46 +0000 (0:00:00.447) 0:00:34.395 **********
2026-04-13 00:42:51.160165 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b)
2026-04-13 00:42:51.160173 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b)
2026-04-13 00:42:51.160180 | orchestrator |
2026-04-13 00:42:51.160186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:42:51.160193 | orchestrator | Monday 13 April 2026 00:42:46 +0000 (0:00:00.426) 0:00:34.822 **********
2026-04-13 00:42:51.160200 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:42:51.160215 | orchestrator |
2026-04-13 00:42:51.160222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160248 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.336) 0:00:35.158 **********
2026-04-13 00:42:51.160255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-13 00:42:51.160262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-13 00:42:51.160269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-13 00:42:51.160276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-13 00:42:51.160291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-13 00:42:51.160298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-13 00:42:51.160305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-13 00:42:51.160312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-13 00:42:51.160319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-13 00:42:51.160335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-13 00:42:51.160343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-13 00:42:51.160350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-13 00:42:51.160357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-13 00:42:51.160364 | orchestrator |
2026-04-13 00:42:51.160371 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160378 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.425) 0:00:35.584 **********
2026-04-13 00:42:51.160395 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160401 | orchestrator |
2026-04-13 00:42:51.160409 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160415 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.211) 0:00:35.795 **********
2026-04-13 00:42:51.160422 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160428 | orchestrator |
2026-04-13 00:42:51.160435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160442 | orchestrator | Monday 13 April 2026 00:42:47 +0000 (0:00:00.190) 0:00:35.986 **********
2026-04-13 00:42:51.160448 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160455 | orchestrator |
2026-04-13 00:42:51.160461 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160468 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.211) 0:00:36.197 **********
2026-04-13 00:42:51.160474 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160481 | orchestrator |
2026-04-13 00:42:51.160487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160494 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.247) 0:00:36.444 **********
2026-04-13 00:42:51.160500 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160506 | orchestrator |
2026-04-13 00:42:51.160513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160519 | orchestrator | Monday 13 April 2026 00:42:48 +0000 (0:00:00.196) 0:00:36.641 **********
2026-04-13 00:42:51.160526 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160532 | orchestrator |
2026-04-13 00:42:51.160539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160546 | orchestrator | Monday 13 April 2026 00:42:49 +0000 (0:00:00.709) 0:00:37.350 **********
2026-04-13 00:42:51.160552 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160558 | orchestrator |
2026-04-13 00:42:51.160591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160599 | orchestrator | Monday 13 April 2026 00:42:49 +0000 (0:00:00.218) 0:00:37.568 **********
2026-04-13 00:42:51.160605 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160612 | orchestrator |
2026-04-13 00:42:51.160618 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160626 | orchestrator | Monday 13 April 2026 00:42:49 +0000 (0:00:00.212) 0:00:37.781 **********
2026-04-13 00:42:51.160633 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-13 00:42:51.160640 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-13 00:42:51.160654 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-13 00:42:51.160661 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-13 00:42:51.160668 | orchestrator |
2026-04-13 00:42:51.160675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160681 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.675) 0:00:38.457 **********
2026-04-13 00:42:51.160688 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160694 | orchestrator |
2026-04-13 00:42:51.160700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160707 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.221) 0:00:38.679 **********
2026-04-13 00:42:51.160714 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160720 | orchestrator |
2026-04-13 00:42:51.160727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160734 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.211) 0:00:38.890 **********
2026-04-13 00:42:51.160741 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160747 | orchestrator |
2026-04-13 00:42:51.160753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:42:51.160760 | orchestrator | Monday 13 April 2026 00:42:50 +0000 (0:00:00.188) 0:00:39.079 **********
2026-04-13 00:42:51.160767 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:51.160773 | orchestrator |
2026-04-13 00:42:51.160786 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-13 00:42:55.704359 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.209) 0:00:39.289 **********
2026-04-13 00:42:55.704452 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-13 00:42:55.704465 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-13 00:42:55.704475 | orchestrator |
2026-04-13 00:42:55.704498 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-13 00:42:55.704508 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.222) 0:00:39.511 **********
2026-04-13 00:42:55.704517 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.704525 | orchestrator |
2026-04-13 00:42:55.704534 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-13 00:42:55.704543 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.138) 0:00:39.649 **********
2026-04-13 00:42:55.704552 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.704560 | orchestrator |
2026-04-13 00:42:55.704597 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-13 00:42:55.704613 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.192) 0:00:39.842 **********
2026-04-13 00:42:55.704628 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.704642 | orchestrator |
2026-04-13 00:42:55.704656 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-13 00:42:55.704665 | orchestrator | Monday 13 April 2026 00:42:51 +0000 (0:00:00.139) 0:00:39.981 **********
2026-04-13 00:42:55.704674 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:55.704684 | orchestrator |
2026-04-13 00:42:55.704693 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-13 00:42:55.704701 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.371) 0:00:40.353 **********
2026-04-13 00:42:55.704711 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}})
2026-04-13 00:42:55.704720 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7331b6c9-9d3b-5dac-8499-53ee0940f196'}})
2026-04-13 00:42:55.704728 | orchestrator |
2026-04-13 00:42:55.704737 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-13 00:42:55.704745 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.191) 0:00:40.545 **********
2026-04-13 00:42:55.704754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}})
2026-04-13 00:42:55.704785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7331b6c9-9d3b-5dac-8499-53ee0940f196'}})
2026-04-13 00:42:55.704794 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.704802 | orchestrator |
2026-04-13 00:42:55.704810 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-13 00:42:55.704819 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.193) 0:00:40.738 **********
2026-04-13 00:42:55.704827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}})
2026-04-13 00:42:55.704836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7331b6c9-9d3b-5dac-8499-53ee0940f196'}})
2026-04-13 00:42:55.704844 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.704853 | orchestrator |
2026-04-13 00:42:55.704861 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-13 00:42:55.704870 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.195) 0:00:40.934 **********
2026-04-13 00:42:55.704878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}})
2026-04-13 00:42:55.704887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7331b6c9-9d3b-5dac-8499-53ee0940f196'}})
2026-04-13 00:42:55.704895 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.704905 | orchestrator |
2026-04-13 00:42:55.704915 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-13 00:42:55.704925 | orchestrator | Monday 13 April 2026 00:42:52 +0000 (0:00:00.171) 0:00:41.105 **********
2026-04-13 00:42:55.704934 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:55.704944 | orchestrator |
2026-04-13 00:42:55.704954 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-13 00:42:55.704963 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.124) 0:00:41.229 **********
2026-04-13 00:42:55.704974 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:42:55.704983 | orchestrator |
2026-04-13 00:42:55.704993 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-13 00:42:55.705002 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.153) 0:00:41.382 **********
2026-04-13 00:42:55.705012 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.705021 | orchestrator |
2026-04-13 00:42:55.705031 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-13 00:42:55.705041 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.143) 0:00:41.526 **********
2026-04-13 00:42:55.705050 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.705060 | orchestrator |
2026-04-13 00:42:55.705070 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-13 00:42:55.705079 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.152) 0:00:41.678 **********
2026-04-13 00:42:55.705089 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.705098 | orchestrator |
2026-04-13 00:42:55.705108 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-13 00:42:55.705117 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.136) 0:00:41.814 **********
2026-04-13 00:42:55.705129 | orchestrator | ok: [testbed-node-5] => {
2026-04-13 00:42:55.705145 | orchestrator |     "ceph_osd_devices": {
2026-04-13 00:42:55.705160 | orchestrator |         "sdb": {
2026-04-13 00:42:55.705198 | orchestrator |             "osd_lvm_uuid": "d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000"
2026-04-13 00:42:55.705215 | orchestrator |         },
2026-04-13 00:42:55.705230 | orchestrator |         "sdc": {
2026-04-13 00:42:55.705246 | orchestrator |             "osd_lvm_uuid": "7331b6c9-9d3b-5dac-8499-53ee0940f196"
2026-04-13 00:42:55.705257 | orchestrator |         }
2026-04-13 00:42:55.705268 | orchestrator |     }
2026-04-13 00:42:55.705278 | orchestrator | }
2026-04-13 00:42:55.705286 | orchestrator |
2026-04-13 00:42:55.705295 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-13 00:42:55.705312 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.135) 0:00:41.949 **********
2026-04-13 00:42:55.705320 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.705329 | orchestrator |
2026-04-13 00:42:55.705338 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-13 00:42:55.705346 | orchestrator | Monday 13 April 2026 00:42:53 +0000 (0:00:00.118) 0:00:42.068 **********
2026-04-13 00:42:55.705355 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.705363 | orchestrator |
2026-04-13 00:42:55.705372 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-13 00:42:55.705380 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.377) 0:00:42.445 **********
2026-04-13 00:42:55.705389 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:42:55.705397 | orchestrator |
2026-04-13 00:42:55.705405 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-13 00:42:55.705430 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.149) 0:00:42.594 **********
2026-04-13 00:42:55.705439 | orchestrator | changed: [testbed-node-5] => {
2026-04-13 00:42:55.705447 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-13 00:42:55.705456 | orchestrator |         "ceph_osd_devices": {
2026-04-13 00:42:55.705465 | orchestrator |             "sdb": {
2026-04-13 00:42:55.705474 | orchestrator |                 "osd_lvm_uuid": "d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000"
2026-04-13 00:42:55.705482 | orchestrator |             },
2026-04-13 00:42:55.705490 | orchestrator |             "sdc": {
2026-04-13 00:42:55.705499 | orchestrator |                 "osd_lvm_uuid": "7331b6c9-9d3b-5dac-8499-53ee0940f196"
2026-04-13 00:42:55.705512 | orchestrator |             }
2026-04-13 00:42:55.705521 | orchestrator |         },
2026-04-13 00:42:55.705529 | orchestrator |         "lvm_volumes": [
2026-04-13 00:42:55.705538 | orchestrator |             {
2026-04-13 00:42:55.705546 | orchestrator |                 "data": "osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000",
2026-04-13 00:42:55.705555 | orchestrator |                 "data_vg": "ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000"
2026-04-13 00:42:55.705563 | orchestrator |             },
2026-04-13 00:42:55.705633 | orchestrator |             {
2026-04-13 00:42:55.705645 | orchestrator |                 "data": "osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196",
2026-04-13 00:42:55.705654 | orchestrator |                 "data_vg": "ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196"
2026-04-13 00:42:55.705663 | orchestrator |             }
2026-04-13 00:42:55.705671 | orchestrator |         ]
2026-04-13 00:42:55.705680 | orchestrator |     }
2026-04-13 00:42:55.705688 | orchestrator | }
2026-04-13 00:42:55.705697 | orchestrator |
2026-04-13 00:42:55.705706 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-13 00:42:55.705714 | orchestrator | Monday 13 April 2026 00:42:54 +0000 (0:00:00.226) 0:00:42.821 **********
2026-04-13 00:42:55.705723 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-13 00:42:55.705732 | orchestrator |
2026-04-13 00:42:55.705740 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:42:55.705749 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-13 00:42:55.705759 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-13 00:42:55.705768 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-13 00:42:55.705777 | orchestrator |
2026-04-13 00:42:55.705785 | orchestrator |
2026-04-13 00:42:55.705794 | orchestrator |
2026-04-13 00:42:55.705802 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:42:55.705811 | orchestrator | Monday 13 April 2026 00:42:55 +0000 (0:00:01.004) 0:00:43.825 **********
2026-04-13 00:42:55.705820 | orchestrator | ===============================================================================
2026-04-13 00:42:55.705835 | orchestrator | Write configuration file ------------------------------------------------ 4.51s
2026-04-13 00:42:55.705843 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s
2026-04-13 00:42:55.705852 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s
2026-04-13 00:42:55.705861 | orchestrator | Get initial list of available block devices ----------------------------- 1.14s
2026-04-13 00:42:55.705869 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2026-04-13 00:42:55.705878 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s
2026-04-13 00:42:55.705886 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2026-04-13 00:42:55.705895 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.82s
2026-04-13 00:42:55.705903 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2026-04-13 00:42:55.705916 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2026-04-13 00:42:55.705933 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.72s
2026-04-13 00:42:55.705947 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-04-13 00:42:55.705963 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-04-13 00:42:55.705989 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-04-13 00:42:56.067932 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-04-13 00:42:56.068008 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-04-13 00:42:56.068014 | orchestrator | Set WAL devices config data --------------------------------------------- 0.67s
2026-04-13 00:42:56.068019 | orchestrator | Print configuration data ------------------------------------------------ 0.67s
2026-04-13 00:42:56.068023 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.66s
2026-04-13 00:42:56.068027 | orchestrator | Print DB devices -------------------------------------------------------- 0.64s
2026-04-13 00:43:17.916854 | orchestrator | 2026-04-13 00:43:17 | INFO  | Task ab09bcff-d6ea-4bba-8d09-e52b8359dbc8 (sync inventory) is running in background. Output coming soon.
2026-04-13 00:43:49.781175 | orchestrator | 2026-04-13 00:43:19 | INFO  | Starting group_vars file reorganization
2026-04-13 00:43:49.781313 | orchestrator | 2026-04-13 00:43:19 | INFO  | Moved 0 file(s) to their respective directories
2026-04-13 00:43:49.781338 | orchestrator | 2026-04-13 00:43:19 | INFO  | Group_vars file reorganization completed
2026-04-13 00:43:49.781357 | orchestrator | 2026-04-13 00:43:22 | INFO  | Starting variable preparation from inventory
2026-04-13 00:43:49.781376 | orchestrator | 2026-04-13 00:43:25 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-13 00:43:49.781413 | orchestrator | 2026-04-13 00:43:25 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-13 00:43:49.781434 | orchestrator | 2026-04-13 00:43:25 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-13 00:43:49.781454 | orchestrator | 2026-04-13 00:43:25 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-13 00:43:49.781473 | orchestrator | 2026-04-13 00:43:25 | INFO  | Variable preparation completed
2026-04-13 00:43:49.781494 | orchestrator | 2026-04-13 00:43:27 | INFO  | Starting inventory overwrite handling
2026-04-13 00:43:49.781513 | orchestrator | 2026-04-13 00:43:27 | INFO  | Handling group overwrites in 99-overwrite
2026-04-13 00:43:49.781559 | orchestrator | 2026-04-13 00:43:27 | INFO  | Removing group frr:children from 60-generic
2026-04-13 00:43:49.781580 | orchestrator | 2026-04-13 00:43:27 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-13 00:43:49.781638 | orchestrator | 2026-04-13 00:43:27 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-13 00:43:49.781659 | orchestrator | 2026-04-13 00:43:27 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-13 00:43:49.781679 | orchestrator | 2026-04-13 00:43:27 | INFO  | Handling group overwrites in 20-roles
2026-04-13 00:43:49.781703 | orchestrator | 2026-04-13 00:43:27 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-13 00:43:49.781726 | orchestrator | 2026-04-13 00:43:27 | INFO  | Removed 5 group(s) in total
2026-04-13 00:43:49.781749 | orchestrator | 2026-04-13 00:43:27 | INFO  | Inventory overwrite handling completed
2026-04-13 00:43:49.781770 | orchestrator | 2026-04-13 00:43:28 | INFO  | Starting merge of inventory files
2026-04-13 00:43:49.781793 | orchestrator | 2026-04-13 00:43:28 | INFO  | Inventory files merged successfully
2026-04-13 00:43:49.781817 | orchestrator | 2026-04-13 00:43:33 | INFO  | Generating minified hosts file
2026-04-13 00:43:49.781841 | orchestrator | 2026-04-13 00:43:35 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-13 00:43:49.781865 | orchestrator | 2026-04-13 00:43:35 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-13 00:43:49.781888 | orchestrator | 2026-04-13 00:43:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-13 00:43:49.781911 | orchestrator | 2026-04-13 00:43:48 | INFO  | Successfully wrote ClusterShell configuration
2026-04-13 00:43:49.781934 | orchestrator | [master e5212d0] 2026-04-13-00-43
2026-04-13 00:43:49.781958 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-13 00:43:49.781983 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-13 00:43:49.782008 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-13 00:43:49.782110 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-13 00:43:51.379102 | orchestrator | 2026-04-13 00:43:51 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-13 00:43:51.449097 | orchestrator | 2026-04-13 00:43:51 | INFO  | Task f5e93561-c62c-4cf7-b655-c193a0e6034f (ceph-create-lvm-devices) was prepared for execution.
2026-04-13 00:43:51.449176 | orchestrator | 2026-04-13 00:43:51 | INFO  | It takes a moment until task f5e93561-c62c-4cf7-b655-c193a0e6034f (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-13 00:44:03.576162 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-13 00:44:03.576268 | orchestrator | 2.16.14
2026-04-13 00:44:03.576284 | orchestrator |
2026-04-13 00:44:03.576298 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-13 00:44:03.576310 | orchestrator |
2026-04-13 00:44:03.576321 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:44:03.576333 | orchestrator | Monday 13 April 2026 00:43:55 +0000 (0:00:00.276) 0:00:00.276 **********
2026-04-13 00:44:03.576344 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-13 00:44:03.576355 | orchestrator |
2026-04-13 00:44:03.576366 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:44:03.576377 | orchestrator | Monday 13 April 2026 00:43:56 +0000 (0:00:00.280) 0:00:00.557 **********
2026-04-13 00:44:03.576388 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:03.576399 | orchestrator |
2026-04-13 00:44:03.576410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576421 | orchestrator | Monday 13 April 2026 00:43:56 +0000 (0:00:00.228) 0:00:00.785 **********
2026-04-13 00:44:03.576432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-13 00:44:03.576461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-13 00:44:03.576472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-13 00:44:03.576483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-13 00:44:03.576493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-13 00:44:03.576504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-13 00:44:03.576552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-13 00:44:03.576601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-13 00:44:03.576625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-13 00:44:03.576647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-13 00:44:03.576658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-13 00:44:03.576668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-13 00:44:03.576679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-13 00:44:03.576691 | orchestrator |
2026-04-13 00:44:03.576703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576715 | orchestrator | Monday 13 April 2026 00:43:56 +0000 (0:00:00.396) 0:00:01.182 **********
2026-04-13 00:44:03.576727 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576740 | orchestrator |
2026-04-13 00:44:03.576753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576765 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.508) 0:00:01.690 **********
2026-04-13 00:44:03.576776 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576786 | orchestrator |
2026-04-13 00:44:03.576797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576807 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.194) 0:00:01.885 **********
2026-04-13 00:44:03.576818 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576829 | orchestrator |
2026-04-13 00:44:03.576839 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576850 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.196) 0:00:02.081 **********
2026-04-13 00:44:03.576861 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576871 | orchestrator |
2026-04-13 00:44:03.576882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576893 | orchestrator | Monday 13 April 2026 00:43:57 +0000 (0:00:00.193) 0:00:02.275 **********
2026-04-13 00:44:03.576903 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576914 | orchestrator |
2026-04-13 00:44:03.576924 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576935 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.192) 0:00:02.467 **********
2026-04-13 00:44:03.576946 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576956 | orchestrator |
2026-04-13 00:44:03.576967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.576977 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.209) 0:00:02.677 **********
2026-04-13 00:44:03.576988 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.576999 | orchestrator |
2026-04-13 00:44:03.577010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.577021 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.174) 0:00:02.851 **********
2026-04-13 00:44:03.577031 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577042 | orchestrator |
2026-04-13 00:44:03.577052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.577069 | orchestrator | Monday 13 April 2026 00:43:58 +0000 (0:00:00.225) 0:00:03.076 **********
2026-04-13 00:44:03.577079 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5)
2026-04-13 00:44:03.577091 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5)
2026-04-13 00:44:03.577102 | orchestrator |
2026-04-13 00:44:03.577113 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.577140 | orchestrator | Monday 13 April 2026 00:43:59 +0000 (0:00:00.425) 0:00:03.502 **********
2026-04-13 00:44:03.577152 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089)
2026-04-13 00:44:03.577162 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089)
2026-04-13 00:44:03.577173 | orchestrator |
2026-04-13 00:44:03.577184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.577195 | orchestrator | Monday 13 April 2026 00:43:59 +0000 (0:00:00.430) 0:00:03.933 **********
2026-04-13 00:44:03.577205 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1)
2026-04-13 00:44:03.577216 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1)
2026-04-13 00:44:03.577227 | orchestrator |
2026-04-13 00:44:03.577238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.577248 | orchestrator | Monday 13 April 2026 00:44:00 +0000 (0:00:00.640) 0:00:04.574 **********
2026-04-13 00:44:03.577259 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb)
2026-04-13 00:44:03.577270 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb)
2026-04-13 00:44:03.577280 | orchestrator |
2026-04-13 00:44:03.577291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:03.577302 | orchestrator | Monday 13 April 2026 00:44:00 +0000 (0:00:00.649) 0:00:05.224 **********
2026-04-13 00:44:03.577312 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:44:03.577323 | orchestrator |
2026-04-13 00:44:03.577334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577344 | orchestrator | Monday 13 April 2026 00:44:01 +0000 (0:00:00.782) 0:00:06.006 **********
2026-04-13 00:44:03.577355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-13 00:44:03.577366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-13 00:44:03.577376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-13 00:44:03.577387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-13 00:44:03.577397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-13 00:44:03.577408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-13 00:44:03.577419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-13 00:44:03.577429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-13 00:44:03.577440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-13 00:44:03.577450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-13 00:44:03.577461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-13 00:44:03.577471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-13 00:44:03.577489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-13 00:44:03.577499 | orchestrator |
2026-04-13 00:44:03.577510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577549 | orchestrator | Monday 13 April 2026 00:44:02 +0000 (0:00:00.433) 0:00:06.441 **********
2026-04-13 00:44:03.577567 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577586 | orchestrator |
2026-04-13 00:44:03.577606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577625 | orchestrator | Monday 13 April 2026 00:44:02 +0000 (0:00:00.193) 0:00:06.634 **********
2026-04-13 00:44:03.577644 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577665 | orchestrator |
2026-04-13 00:44:03.577685 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577704 | orchestrator | Monday 13 April 2026 00:44:02 +0000 (0:00:00.201) 0:00:06.835 **********
2026-04-13 00:44:03.577718 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577729 | orchestrator |
2026-04-13 00:44:03.577740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577750 | orchestrator | Monday 13 April 2026 00:44:02 +0000 (0:00:00.202) 0:00:07.038 **********
2026-04-13 00:44:03.577761 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577771 | orchestrator |
2026-04-13 00:44:03.577782 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577793 | orchestrator | Monday 13 April 2026 00:44:02 +0000 (0:00:00.203) 0:00:07.241 **********
2026-04-13 00:44:03.577815 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577827 | orchestrator |
2026-04-13 00:44:03.577837 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577848 | orchestrator | Monday 13 April 2026 00:44:03 +0000 (0:00:00.219) 0:00:07.461 **********
2026-04-13 00:44:03.577859 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577869 | orchestrator |
2026-04-13 00:44:03.577880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:03.577890 | orchestrator | Monday 13 April 2026 00:44:03 +0000 (0:00:00.200) 0:00:07.662 **********
2026-04-13 00:44:03.577901 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:03.577912 | orchestrator |
2026-04-13 00:44:03.577930 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:11.709865 | orchestrator | Monday 13 April 2026 00:44:03 +0000 (0:00:00.227) 0:00:07.889 **********
2026-04-13 00:44:11.709941 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.709948 | orchestrator |
2026-04-13 00:44:11.709953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:11.709957 | orchestrator | Monday 13 April 2026 00:44:03 +0000 (0:00:00.196) 0:00:08.085 **********
2026-04-13 00:44:11.709962 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-13 00:44:11.709967 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-13 00:44:11.709972 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-13 00:44:11.709975 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-13 00:44:11.709979 | orchestrator |
2026-04-13 00:44:11.709984 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:11.709988 | orchestrator | Monday 13 April 2026 00:44:04 +0000 (0:00:01.052) 0:00:09.138 **********
2026-04-13 00:44:11.709992 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.709995 | orchestrator |
2026-04-13 00:44:11.709999 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:11.710003 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.187) 0:00:09.326 **********
2026-04-13 00:44:11.710007 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710010 | orchestrator |
2026-04-13 00:44:11.710051 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:11.710055 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.209) 0:00:09.536 **********
2026-04-13 00:44:11.710075 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710079 | orchestrator |
2026-04-13 00:44:11.710083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:11.710087 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.200) 0:00:09.737 **********
2026-04-13 00:44:11.710091 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710094 | orchestrator |
2026-04-13 00:44:11.710102 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-13 00:44:11.710106 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.230) 0:00:09.968 **********
2026-04-13 00:44:11.710110 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710113 | orchestrator |
2026-04-13 00:44:11.710117 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-13 00:44:11.710121 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.131) 0:00:10.099 **********
2026-04-13 00:44:11.710125 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'}})
2026-04-13 00:44:11.710129 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '100799fe-f0b8-5d68-80c9-d39d0aace7f9'}})
2026-04-13 00:44:11.710133 | orchestrator |
2026-04-13 00:44:11.710137 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-13 00:44:11.710141 | orchestrator | Monday 13 April 2026 00:44:05 +0000 (0:00:00.210) 0:00:10.309 **********
2026-04-13 00:44:11.710146 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710151 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710155 | orchestrator |
2026-04-13 00:44:11.710159 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-13 00:44:11.710163 | orchestrator | Monday 13 April 2026 00:44:07 +0000 (0:00:01.979) 0:00:12.289 **********
2026-04-13 00:44:11.710167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710172 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710176 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710179 | orchestrator |
2026-04-13 00:44:11.710183 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-13 00:44:11.710187 | orchestrator | Monday 13 April 2026 00:44:08 +0000 (0:00:00.220) 0:00:12.509 **********
2026-04-13 00:44:11.710190 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710194 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710198 | orchestrator |
2026-04-13 00:44:11.710202 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-13 00:44:11.710205 | orchestrator | Monday 13 April 2026 00:44:09 +0000 (0:00:01.445) 0:00:13.955 **********
2026-04-13 00:44:11.710209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710216 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710220 | orchestrator |
2026-04-13 00:44:11.710224 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-13 00:44:11.710232 | orchestrator | Monday 13 April 2026 00:44:09 +0000 (0:00:00.155) 0:00:14.111 **********
2026-04-13 00:44:11.710247 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710251 | orchestrator |
2026-04-13 00:44:11.710255 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-13 00:44:11.710259 | orchestrator | Monday 13 April 2026 00:44:09 +0000 (0:00:00.136) 0:00:14.248 **********
2026-04-13 00:44:11.710263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710270 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710274 | orchestrator |
2026-04-13 00:44:11.710278 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-13 00:44:11.710282 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:00.370) 0:00:14.619 **********
2026-04-13 00:44:11.710285 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710289 | orchestrator |
2026-04-13 00:44:11.710293 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-13 00:44:11.710296 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:00.122) 0:00:14.742 **********
2026-04-13 00:44:11.710300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710308 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710311 | orchestrator |
2026-04-13 00:44:11.710315 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-13 00:44:11.710319 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:00.149) 0:00:14.891 **********
2026-04-13 00:44:11.710323 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710326 | orchestrator |
2026-04-13 00:44:11.710330 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-13 00:44:11.710334 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:00.162) 0:00:15.053 **********
2026-04-13 00:44:11.710337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710345 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710348 | orchestrator |
2026-04-13 00:44:11.710352 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-13 00:44:11.710356 | orchestrator | Monday 13 April 2026 00:44:10 +0000 (0:00:00.148) 0:00:15.202 **********
2026-04-13 00:44:11.710360 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:11.710363 | orchestrator |
2026-04-13 00:44:11.710367 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-13 00:44:11.710371 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.134) 0:00:15.336 **********
2026-04-13 00:44:11.710375 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:11.710378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:11.710382 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:11.710386 | orchestrator |
2026-04-13 00:44:11.710390 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-13 00:44:11.710397 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.145) 0:00:15.482 ********** 2026-04-13 00:44:11.710401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})  2026-04-13 00:44:11.710406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})  2026-04-13 00:44:11.710410 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:11.710414 | orchestrator | 2026-04-13 00:44:11.710418 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-13 00:44:11.710423 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.160) 0:00:15.642 ********** 2026-04-13 00:44:11.710427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})  2026-04-13 00:44:11.710432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})  2026-04-13 00:44:11.710436 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:11.710440 | orchestrator | 2026-04-13 00:44:11.710444 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-13 00:44:11.710449 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.168) 0:00:15.811 ********** 2026-04-13 00:44:11.710453 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:44:11.710457 | orchestrator | 2026-04-13 00:44:11.710462 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-13 00:44:11.710469 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.211) 0:00:16.023 ********** 2026-04-13 
00:44:17.948008 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948109 | orchestrator |
2026-04-13 00:44:17.948124 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-13 00:44:17.948136 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.122) 0:00:16.146 **********
2026-04-13 00:44:17.948146 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948156 | orchestrator |
2026-04-13 00:44:17.948166 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-13 00:44:17.948176 | orchestrator | Monday 13 April 2026 00:44:11 +0000 (0:00:00.138) 0:00:16.284 **********
2026-04-13 00:44:17.948186 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:17.948196 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-13 00:44:17.948207 | orchestrator | }
2026-04-13 00:44:17.948217 | orchestrator |
2026-04-13 00:44:17.948227 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-13 00:44:17.948237 | orchestrator | Monday 13 April 2026 00:44:12 +0000 (0:00:00.347) 0:00:16.632 **********
2026-04-13 00:44:17.948246 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:17.948256 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-13 00:44:17.948266 | orchestrator | }
2026-04-13 00:44:17.948275 | orchestrator |
2026-04-13 00:44:17.948285 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-13 00:44:17.948295 | orchestrator | Monday 13 April 2026 00:44:12 +0000 (0:00:00.140) 0:00:16.772 **********
2026-04-13 00:44:17.948305 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:17.948314 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-13 00:44:17.948341 | orchestrator | }
2026-04-13 00:44:17.948352 | orchestrator |
2026-04-13 00:44:17.948361 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-13 00:44:17.948371 | orchestrator | Monday 13 April 2026 00:44:12 +0000 (0:00:00.134) 0:00:16.906 **********
2026-04-13 00:44:17.948380 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:17.948390 | orchestrator |
2026-04-13 00:44:17.948406 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-13 00:44:17.948415 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.633) 0:00:17.540 **********
2026-04-13 00:44:17.948425 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:17.948452 | orchestrator |
2026-04-13 00:44:17.948462 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-13 00:44:17.948472 | orchestrator | Monday 13 April 2026 00:44:13 +0000 (0:00:00.495) 0:00:18.035 **********
2026-04-13 00:44:17.948481 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:17.948491 | orchestrator |
2026-04-13 00:44:17.948501 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-13 00:44:17.948542 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.146) 0:00:18.558 **********
2026-04-13 00:44:17.948559 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:17.948571 | orchestrator |
2026-04-13 00:44:17.948581 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-13 00:44:17.948592 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.116) 0:00:18.704 **********
2026-04-13 00:44:17.948609 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948626 | orchestrator |
2026-04-13 00:44:17.948642 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-13 00:44:17.948658 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.131) 0:00:18.821 **********
2026-04-13 00:44:17.948676 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948694 | orchestrator |
2026-04-13 00:44:17.948711 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-13 00:44:17.948727 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.131) 0:00:18.952 **********
2026-04-13 00:44:17.948739 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:17.948750 | orchestrator |     "vgs_report": {
2026-04-13 00:44:17.948761 | orchestrator |         "vg": []
2026-04-13 00:44:17.948772 | orchestrator |     }
2026-04-13 00:44:17.948783 | orchestrator | }
2026-04-13 00:44:17.948794 | orchestrator |
2026-04-13 00:44:17.948805 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-13 00:44:17.948816 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.134) 0:00:19.087 **********
2026-04-13 00:44:17.948827 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948838 | orchestrator |
2026-04-13 00:44:17.948849 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-13 00:44:17.948859 | orchestrator | Monday 13 April 2026 00:44:14 +0000 (0:00:00.146) 0:00:19.234 **********
2026-04-13 00:44:17.948871 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948882 | orchestrator |
2026-04-13 00:44:17.948892 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-13 00:44:17.948904 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.139) 0:00:19.373 **********
2026-04-13 00:44:17.948913 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948923 | orchestrator |
2026-04-13 00:44:17.948932 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-13 00:44:17.948942 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.126) 0:00:19.499 **********
2026-04-13 00:44:17.948951 | orchestrator | skipping: [testbed-node-3]
2026-04-13
00:44:17.948960 | orchestrator |
2026-04-13 00:44:17.948970 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-13 00:44:17.948979 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.350) 0:00:19.850 **********
2026-04-13 00:44:17.948989 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.948998 | orchestrator |
2026-04-13 00:44:17.949008 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-13 00:44:17.949017 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.142) 0:00:19.992 **********
2026-04-13 00:44:17.949027 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949036 | orchestrator |
2026-04-13 00:44:17.949046 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-13 00:44:17.949055 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.135) 0:00:20.128 **********
2026-04-13 00:44:17.949065 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949074 | orchestrator |
2026-04-13 00:44:17.949083 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-13 00:44:17.949103 | orchestrator | Monday 13 April 2026 00:44:15 +0000 (0:00:00.141) 0:00:20.269 **********
2026-04-13 00:44:17.949130 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949140 | orchestrator |
2026-04-13 00:44:17.949150 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-13 00:44:17.949159 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.142) 0:00:20.413 **********
2026-04-13 00:44:17.949169 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949178 | orchestrator |
2026-04-13 00:44:17.949188 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-13 00:44:17.949197 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.137) 0:00:20.550 **********
2026-04-13 00:44:17.949207 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949216 | orchestrator |
2026-04-13 00:44:17.949226 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-13 00:44:17.949236 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.122) 0:00:20.672 **********
2026-04-13 00:44:17.949245 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949255 | orchestrator |
2026-04-13 00:44:17.949265 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-13 00:44:17.949274 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.129) 0:00:20.802 **********
2026-04-13 00:44:17.949284 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949293 | orchestrator |
2026-04-13 00:44:17.949303 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-13 00:44:17.949313 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.144) 0:00:20.946 **********
2026-04-13 00:44:17.949322 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949332 | orchestrator |
2026-04-13 00:44:17.949341 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-13 00:44:17.949351 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.129) 0:00:21.075 **********
2026-04-13 00:44:17.949360 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949370 | orchestrator |
2026-04-13 00:44:17.949385 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-13 00:44:17.949394 | orchestrator | Monday 13 April 2026 00:44:16 +0000 (0:00:00.119) 0:00:21.195 **********
2026-04-13 00:44:17.949405 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg':
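The 30 GiB guard tasks and the "Create DB LVs" loop above pair each `{'data': ..., 'data_vg': ...}` item with one `lvcreate` call, refusing to proceed when the computed DB LV size falls below the floor. A sketch of that loop (the `osd-db-<uuid>` LV name and the `db_vg` parameter are guesses for illustration, not the playbook's actual convention; the item dicts are copied from the log):

```python
# Loop items as logged: one dict per OSD block LV.
lvm_volumes = [
    {"data": "osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a",
     "data_vg": "ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"},
    {"data": "osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9",
     "data_vg": "ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9"},
]

MIN_DB_LV_BYTES = 30 * 2**30  # the 30 GiB floor enforced by the tasks above

def db_lvcreate_cmds(volumes, db_vg, lv_bytes):
    """Render one `lvcreate` per OSD; refuse sizes below the 30 GiB floor.

    `osd-db-<uuid>` is a hypothetical naming scheme for this sketch."""
    if lv_bytes < MIN_DB_LV_BYTES:
        raise ValueError("DB LV size < 30 GiB")
    cmds = []
    for vol in volumes:
        uuid = vol["data"].removeprefix("osd-block-")
        cmds.append(f"lvcreate -L {lv_bytes}B -n osd-db-{uuid} {db_vg}")
    return cmds
```

Here every item is skipped: with no `ceph_db_devices` configured on testbed-node-3, no DB LVs are created.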
'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:17.949416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:17.949426 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949435 | orchestrator |
2026-04-13 00:44:17.949445 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-13 00:44:17.949454 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.153) 0:00:21.349 **********
2026-04-13 00:44:17.949464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:17.949474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:17.949483 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949493 | orchestrator |
2026-04-13 00:44:17.949502 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-13 00:44:17.949589 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.350) 0:00:21.699 **********
2026-04-13 00:44:17.949600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:17.949609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:17.949626 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949636 | orchestrator |
2026-04-13 00:44:17.949652 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-13 00:44:17.949669 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.177) 0:00:21.877 **********
2026-04-13 00:44:17.949688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:17.949706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:17.949725 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949744 | orchestrator |
2026-04-13 00:44:17.949761 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-13 00:44:17.949771 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.162) 0:00:22.039 **********
2026-04-13 00:44:17.949781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:17.949791 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:17.949800 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:17.949810 | orchestrator |
2026-04-13 00:44:17.949819 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-13 00:44:17.949829 | orchestrator | Monday 13 April 2026 00:44:17 +0000 (0:00:00.162) 0:00:22.202 **********
2026-04-13 00:44:17.949846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.336184 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.336318 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:23.336347 | orchestrator |
2026-04-13 00:44:23.336364 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-13 00:44:23.336377 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.168) 0:00:22.370 **********
2026-04-13 00:44:23.336389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.336401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.336411 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:23.336422 | orchestrator |
2026-04-13 00:44:23.336434 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-13 00:44:23.336465 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.164) 0:00:22.535 **********
2026-04-13 00:44:23.336486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.336565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.336586 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:23.336603 | orchestrator |
2026-04-13 00:44:23.336622 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-13 00:44:23.336641 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.160) 0:00:22.695 **********
2026-04-13 00:44:23.336659 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:23.336677 |
orchestrator |
2026-04-13 00:44:23.336692 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-13 00:44:23.336740 | orchestrator | Monday 13 April 2026 00:44:18 +0000 (0:00:00.518) 0:00:23.213 **********
2026-04-13 00:44:23.336758 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:23.336775 | orchestrator |
2026-04-13 00:44:23.336792 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-13 00:44:23.336809 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.512) 0:00:23.726 **********
2026-04-13 00:44:23.336827 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:44:23.336844 | orchestrator |
2026-04-13 00:44:23.336862 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-13 00:44:23.336880 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.139) 0:00:23.865 **********
2026-04-13 00:44:23.336898 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'vg_name': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.336917 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'vg_name': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.336934 | orchestrator |
2026-04-13 00:44:23.336951 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-13 00:44:23.336970 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.167) 0:00:24.033 **********
2026-04-13 00:44:23.336989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.337008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.337027 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:23.337045 | orchestrator |
2026-04-13 00:44:23.337063 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-13 00:44:23.337082 | orchestrator | Monday 13 April 2026 00:44:19 +0000 (0:00:00.159) 0:00:24.192 **********
2026-04-13 00:44:23.337124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.337144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.337155 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:23.337166 | orchestrator |
2026-04-13 00:44:23.337177 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-13 00:44:23.337188 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.381) 0:00:24.574 **********
2026-04-13 00:44:23.337198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'})
2026-04-13 00:44:23.337209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'})
2026-04-13 00:44:23.337220 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:44:23.337231 | orchestrator |
2026-04-13 00:44:23.337242 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-13 00:44:23.337252 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.150) 0:00:24.725 **********
2026-04-13 00:44:23.337287 | orchestrator | ok: [testbed-node-3] => {
2026-04-13 00:44:23.337298 | orchestrator |     "lvm_report": {
2026-04-13 00:44:23.337310 | orchestrator |         "lv": [
2026-04-13 00:44:23.337320 | orchestrator |             {
2026-04-13 00:44:23.337331 | orchestrator |                 "lv_name": "osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9",
2026-04-13 00:44:23.337343 | orchestrator |                 "vg_name": "ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9"
2026-04-13 00:44:23.337354 | orchestrator |             },
2026-04-13 00:44:23.337364 | orchestrator |             {
2026-04-13 00:44:23.337391 | orchestrator |                 "lv_name": "osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a",
2026-04-13 00:44:23.337401 | orchestrator |                 "vg_name": "ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"
2026-04-13 00:44:23.337412 | orchestrator |             }
2026-04-13 00:44:23.337423 | orchestrator |         ],
2026-04-13 00:44:23.337434 | orchestrator |         "pv": [
2026-04-13 00:44:23.337444 | orchestrator |             {
2026-04-13 00:44:23.337455 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-13 00:44:23.337466 | orchestrator |                 "vg_name": "ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"
2026-04-13 00:44:23.337477 | orchestrator |             },
2026-04-13 00:44:23.337487 | orchestrator |             {
2026-04-13 00:44:23.337498 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-13 00:44:23.337544 | orchestrator |                 "vg_name": "ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9"
2026-04-13 00:44:23.337556 | orchestrator |             }
2026-04-13 00:44:23.337567 | orchestrator |         ]
2026-04-13 00:44:23.337578 | orchestrator |     }
2026-04-13 00:44:23.337589 | orchestrator | }
2026-04-13 00:44:23.337600 | orchestrator |
2026-04-13 00:44:23.337610 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-13 00:44:23.337621 | orchestrator |
2026-04-13 00:44:23.337632 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-13 00:44:23.337648 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.303) 0:00:25.028 **********
2026-04-13 00:44:23.337659 | orchestrator | ok: [testbed-node-4 ->
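The `lvm_report` printed above is just the `lv` array from `lvs --reportformat json` merged with the `pv` array from `pvs --reportformat json`, as the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task name suggests. A sketch of that combination step (sample command output inlined, reduced to one LV/PV; the playbook's actual filter chain is assumed, not quoted):

```python
import json

# Trimmed stand-ins for the captured stdout of `lvs` and `pvs`.
lvs_out = json.loads('{"report": [{"lv": [{"lv_name": '
    '"osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a", '
    '"vg_name": "ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"}]}]}')
pvs_out = json.loads('{"report": [{"pv": [{"pv_name": "/dev/sdb", '
    '"vg_name": "ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a"}]}]}')

# Merge the two single-element "report" entries into one dict, mirroring
# an Ansible `combine` of the parsed stdout of both commands.
lvm_report = {**lvs_out["report"][0], **pvs_out["report"][0]}

# VG/LV names in "<vg>/<lv>" form, the shape the existence checks compare
# lvm_volumes entries against.
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
```

With both block LVs present and each PV (`/dev/sdb`, `/dev/sdc`) bound to its `ceph-<uuid>` VG, the "Fail if … missing" checks have nothing to trip on.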
testbed-manager(192.168.16.5)]
2026-04-13 00:44:23.337671 | orchestrator |
2026-04-13 00:44:23.337682 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-13 00:44:23.337692 | orchestrator | Monday 13 April 2026 00:44:20 +0000 (0:00:00.245) 0:00:25.274 **********
2026-04-13 00:44:23.337704 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:44:23.337714 | orchestrator |
2026-04-13 00:44:23.337725 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.337736 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.296) 0:00:25.570 **********
2026-04-13 00:44:23.337747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-13 00:44:23.337758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-13 00:44:23.337768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-13 00:44:23.337779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-13 00:44:23.337789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-13 00:44:23.337800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-13 00:44:23.337811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-13 00:44:23.337821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-13 00:44:23.337832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-13 00:44:23.337843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-13 00:44:23.337853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-13 00:44:23.337864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-13 00:44:23.337875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-13 00:44:23.337885 | orchestrator |
2026-04-13 00:44:23.337896 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.337907 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.429) 0:00:25.999 **********
2026-04-13 00:44:23.337917 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:23.337928 | orchestrator |
2026-04-13 00:44:23.337947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.337957 | orchestrator | Monday 13 April 2026 00:44:21 +0000 (0:00:00.196) 0:00:26.196 **********
2026-04-13 00:44:23.337968 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:23.337979 | orchestrator |
2026-04-13 00:44:23.337989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.338000 | orchestrator | Monday 13 April 2026 00:44:22 +0000 (0:00:00.228) 0:00:26.425 **********
2026-04-13 00:44:23.338010 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:23.338079 | orchestrator |
2026-04-13 00:44:23.338091 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.338102 | orchestrator | Monday 13 April 2026 00:44:22 +0000 (0:00:00.182) 0:00:26.608 **********
2026-04-13 00:44:23.338113 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:23.338124 | orchestrator |
2026-04-13 00:44:23.338135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.338146 | orchestrator | Monday 13 April 2026 00:44:22 +0000
(0:00:00.631) 0:00:27.239 **********
2026-04-13 00:44:23.338157 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:23.338168 | orchestrator |
2026-04-13 00:44:23.338178 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:23.338189 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.205) 0:00:27.445 **********
2026-04-13 00:44:23.338200 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:23.338211 | orchestrator |
2026-04-13 00:44:23.338230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.817446 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.205) 0:00:27.650 **********
2026-04-13 00:44:33.817642 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.817672 | orchestrator |
2026-04-13 00:44:33.818403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.818435 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.197) 0:00:27.847 **********
2026-04-13 00:44:33.818450 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.818462 | orchestrator |
2026-04-13 00:44:33.818473 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.818484 | orchestrator | Monday 13 April 2026 00:44:23 +0000 (0:00:00.175) 0:00:28.023 **********
2026-04-13 00:44:33.818523 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191)
2026-04-13 00:44:33.818546 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191)
2026-04-13 00:44:33.818566 | orchestrator |
2026-04-13 00:44:33.818586 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.818600 | orchestrator | Monday 13 April 2026 00:44:24 +0000 (0:00:00.420) 0:00:28.443 **********
2026-04-13 00:44:33.818611 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7)
2026-04-13 00:44:33.818623 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7)
2026-04-13 00:44:33.818634 | orchestrator |
2026-04-13 00:44:33.818665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.818685 | orchestrator | Monday 13 April 2026 00:44:24 +0000 (0:00:00.477) 0:00:28.921 **********
2026-04-13 00:44:33.818702 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9)
2026-04-13 00:44:33.818720 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9)
2026-04-13 00:44:33.818738 | orchestrator |
2026-04-13 00:44:33.818755 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.818774 | orchestrator | Monday 13 April 2026 00:44:25 +0000 (0:00:00.421) 0:00:29.343 **********
2026-04-13 00:44:33.818794 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69)
2026-04-13 00:44:33.818837 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69)
2026-04-13 00:44:33.818850 | orchestrator |
2026-04-13 00:44:33.818860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-13 00:44:33.818871 | orchestrator | Monday 13 April 2026 00:44:25 +0000 (0:00:00.418) 0:00:29.761 **********
2026-04-13 00:44:33.818882 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-13 00:44:33.818893 | orchestrator |
2026-04-13 00:44:33.818904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13
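Each "Add known links" pass above maps stable `/dev/disk/by-id` names (the `scsi-0QEMU_…`/`scsi-SQEMU_…` items) onto the kernel device node they resolve to. A self-contained sketch of that resolution, using throwaway directories in place of `/dev` and `/dev/disk/by-id` (the `scsi-0QEMU_QEMU_HARDDISK_example` name is illustrative):

```python
import os
import tempfile

# Stand-ins for /dev and /dev/disk/by-id: by-id entries are symlinks
# named after hardware IDs that point back at kernel device nodes.
dev = tempfile.mkdtemp()
by_id = tempfile.mkdtemp()
open(os.path.join(dev, "sdb"), "w").close()
os.symlink(os.path.join(dev, "sdb"),
           os.path.join(by_id, "scsi-0QEMU_QEMU_HARDDISK_example"))

def links_for(device, by_id_dir):
    """Return all by-id names that resolve to the given device node."""
    return sorted(
        name for name in os.listdir(by_id_dir)
        if os.path.realpath(os.path.join(by_id_dir, name))
           == os.path.realpath(device)
    )

print(links_for(os.path.join(dev, "sdb"), by_id))
```

Recording both spellings of each ID (as the log does per disk) lets later device references in `ceph_osd_devices` match whichever alias an operator configured.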
00:44:33.818914 | orchestrator | Monday 13 April 2026 00:44:25 +0000 (0:00:00.344) 0:00:30.106 **********
2026-04-13 00:44:33.818925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-13 00:44:33.818936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-13 00:44:33.818947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-13 00:44:33.818958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-13 00:44:33.818968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-13 00:44:33.818979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-13 00:44:33.818990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-13 00:44:33.819000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-13 00:44:33.819012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-13 00:44:33.819023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-13 00:44:33.819033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-13 00:44:33.819044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-13 00:44:33.819054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-13 00:44:33.819065 | orchestrator |
2026-04-13 00:44:33.819076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819087 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.638) 0:00:30.745 **********
2026-04-13 00:44:33.819098 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819108 | orchestrator |
2026-04-13 00:44:33.819119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819130 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.197) 0:00:30.942 **********
2026-04-13 00:44:33.819141 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819151 | orchestrator |
2026-04-13 00:44:33.819162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819173 | orchestrator | Monday 13 April 2026 00:44:26 +0000 (0:00:00.212) 0:00:31.154 **********
2026-04-13 00:44:33.819184 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819195 | orchestrator |
2026-04-13 00:44:33.819229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819240 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.205) 0:00:31.360 **********
2026-04-13 00:44:33.819251 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819262 | orchestrator |
2026-04-13 00:44:33.819273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819284 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.196) 0:00:31.557 **********
2026-04-13 00:44:33.819295 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819305 | orchestrator |
2026-04-13 00:44:33.819316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819337 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.205) 0:00:31.763 **********
2026-04-13 00:44:33.819348 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819359 | orchestrator |
2026-04-13 00:44:33.819370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819381 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.213) 0:00:31.976 **********
2026-04-13 00:44:33.819392 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819403 | orchestrator |
2026-04-13 00:44:33.819414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819424 | orchestrator | Monday 13 April 2026 00:44:27 +0000 (0:00:00.187) 0:00:32.164 **********
2026-04-13 00:44:33.819435 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819446 | orchestrator |
2026-04-13 00:44:33.819457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819468 | orchestrator | Monday 13 April 2026 00:44:28 +0000 (0:00:00.210) 0:00:32.375 **********
2026-04-13 00:44:33.819479 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-13 00:44:33.819489 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-13 00:44:33.819578 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-13 00:44:33.819597 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-13 00:44:33.819615 | orchestrator |
2026-04-13 00:44:33.819633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819649 | orchestrator | Monday 13 April 2026 00:44:28 +0000 (0:00:00.907) 0:00:33.283 **********
2026-04-13 00:44:33.819667 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:44:33.819686 | orchestrator |
2026-04-13 00:44:33.819706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-13 00:44:33.819724 | orchestrator | Monday 13 April 2026 00:44:29 +0000 (0:00:00.219) 0:00:33.502 **********
2026-04-13 00:44:33.819742 | orchestrator | skipping: [testbed-node-4]
2026-04-13
00:44:33.819753 | orchestrator | 2026-04-13 00:44:33.819764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:33.819775 | orchestrator | Monday 13 April 2026 00:44:29 +0000 (0:00:00.200) 0:00:33.703 ********** 2026-04-13 00:44:33.819785 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:33.819796 | orchestrator | 2026-04-13 00:44:33.819807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:33.819818 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.675) 0:00:34.378 ********** 2026-04-13 00:44:33.819828 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:33.819839 | orchestrator | 2026-04-13 00:44:33.819850 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-13 00:44:33.819861 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.225) 0:00:34.604 ********** 2026-04-13 00:44:33.819871 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:33.819882 | orchestrator | 2026-04-13 00:44:33.819893 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-13 00:44:33.819904 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.140) 0:00:34.745 ********** 2026-04-13 00:44:33.819927 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '586ba51f-dba7-5dcd-8710-1804179cab86'}}) 2026-04-13 00:44:33.819947 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '971aa970-5a40-5da7-9620-8f2c789358d2'}}) 2026-04-13 00:44:33.819965 | orchestrator | 2026-04-13 00:44:33.819982 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-13 00:44:33.820000 | orchestrator | Monday 13 April 2026 00:44:30 +0000 (0:00:00.229) 0:00:34.974 ********** 2026-04-13 00:44:33.820019 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'}) 2026-04-13 00:44:33.820041 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'}) 2026-04-13 00:44:33.820072 | orchestrator | 2026-04-13 00:44:33.820092 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-13 00:44:33.820110 | orchestrator | Monday 13 April 2026 00:44:32 +0000 (0:00:01.770) 0:00:36.745 ********** 2026-04-13 00:44:33.820128 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:33.820149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:33.820167 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:33.820185 | orchestrator | 2026-04-13 00:44:33.820202 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-13 00:44:33.820221 | orchestrator | Monday 13 April 2026 00:44:32 +0000 (0:00:00.149) 0:00:36.895 ********** 2026-04-13 00:44:33.820240 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'}) 2026-04-13 00:44:33.820271 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'}) 2026-04-13 00:44:39.407073 | orchestrator | 2026-04-13 00:44:39.407182 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-13 00:44:39.407198 | orchestrator | Monday 13 April 2026 
00:44:33 +0000 (0:00:01.300) 0:00:38.196 ********** 2026-04-13 00:44:39.407210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.407223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.407234 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407245 | orchestrator | 2026-04-13 00:44:39.407256 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-13 00:44:39.407267 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.149) 0:00:38.345 ********** 2026-04-13 00:44:39.407278 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407288 | orchestrator | 2026-04-13 00:44:39.407299 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-13 00:44:39.407310 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.144) 0:00:38.489 ********** 2026-04-13 00:44:39.407367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.407379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.407391 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407401 | orchestrator | 2026-04-13 00:44:39.407412 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-13 00:44:39.407423 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.147) 0:00:38.637 ********** 2026-04-13 00:44:39.407434 | orchestrator | skipping: [testbed-node-4] 2026-04-13 
00:44:39.407444 | orchestrator | 2026-04-13 00:44:39.407455 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-13 00:44:39.407466 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.131) 0:00:38.769 ********** 2026-04-13 00:44:39.407476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.407487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.407532 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407572 | orchestrator | 2026-04-13 00:44:39.407585 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-13 00:44:39.407596 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.140) 0:00:38.909 ********** 2026-04-13 00:44:39.407609 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407621 | orchestrator | 2026-04-13 00:44:39.407634 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-13 00:44:39.407647 | orchestrator | Monday 13 April 2026 00:44:34 +0000 (0:00:00.392) 0:00:39.301 ********** 2026-04-13 00:44:39.407659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.407671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.407684 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407697 | orchestrator | 2026-04-13 00:44:39.407709 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-04-13 00:44:39.407721 | orchestrator | Monday 13 April 2026 00:44:35 +0000 (0:00:00.157) 0:00:39.459 ********** 2026-04-13 00:44:39.407733 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:39.407747 | orchestrator | 2026-04-13 00:44:39.407759 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-13 00:44:39.407770 | orchestrator | Monday 13 April 2026 00:44:35 +0000 (0:00:00.136) 0:00:39.596 ********** 2026-04-13 00:44:39.407782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.407795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.407807 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407819 | orchestrator | 2026-04-13 00:44:39.407831 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-13 00:44:39.407844 | orchestrator | Monday 13 April 2026 00:44:35 +0000 (0:00:00.169) 0:00:39.765 ********** 2026-04-13 00:44:39.407855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.407872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.407891 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.407922 | orchestrator | 2026-04-13 00:44:39.407941 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-13 00:44:39.407982 | orchestrator | Monday 13 April 2026 00:44:35 +0000 (0:00:00.165) 0:00:39.931 
********** 2026-04-13 00:44:39.408001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:39.408021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:39.408040 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408058 | orchestrator | 2026-04-13 00:44:39.408077 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-13 00:44:39.408092 | orchestrator | Monday 13 April 2026 00:44:35 +0000 (0:00:00.164) 0:00:40.096 ********** 2026-04-13 00:44:39.408102 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408113 | orchestrator | 2026-04-13 00:44:39.408124 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-13 00:44:39.408134 | orchestrator | Monday 13 April 2026 00:44:35 +0000 (0:00:00.139) 0:00:40.235 ********** 2026-04-13 00:44:39.408145 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408169 | orchestrator | 2026-04-13 00:44:39.408179 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-13 00:44:39.408190 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:00.133) 0:00:40.369 ********** 2026-04-13 00:44:39.408208 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408219 | orchestrator | 2026-04-13 00:44:39.408230 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-13 00:44:39.408241 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:00.154) 0:00:40.524 ********** 2026-04-13 00:44:39.408251 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:39.408262 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-13 
00:44:39.408273 | orchestrator | } 2026-04-13 00:44:39.408284 | orchestrator | 2026-04-13 00:44:39.408295 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-13 00:44:39.408305 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:00.129) 0:00:40.653 ********** 2026-04-13 00:44:39.408316 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:39.408327 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-13 00:44:39.408338 | orchestrator | } 2026-04-13 00:44:39.408348 | orchestrator | 2026-04-13 00:44:39.408359 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-13 00:44:39.408370 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:00.135) 0:00:40.789 ********** 2026-04-13 00:44:39.408381 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:39.408392 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-13 00:44:39.408402 | orchestrator | } 2026-04-13 00:44:39.408413 | orchestrator | 2026-04-13 00:44:39.408424 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-13 00:44:39.408434 | orchestrator | Monday 13 April 2026 00:44:36 +0000 (0:00:00.139) 0:00:40.929 ********** 2026-04-13 00:44:39.408445 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:39.408456 | orchestrator | 2026-04-13 00:44:39.408467 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-13 00:44:39.408477 | orchestrator | Monday 13 April 2026 00:44:37 +0000 (0:00:00.733) 0:00:41.663 ********** 2026-04-13 00:44:39.408488 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:39.408534 | orchestrator | 2026-04-13 00:44:39.408545 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-13 00:44:39.408556 | orchestrator | Monday 13 April 2026 00:44:37 +0000 (0:00:00.509) 0:00:42.172 ********** 2026-04-13 
00:44:39.408566 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:39.408577 | orchestrator | 2026-04-13 00:44:39.408588 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-13 00:44:39.408598 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.519) 0:00:42.691 ********** 2026-04-13 00:44:39.408609 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:39.408619 | orchestrator | 2026-04-13 00:44:39.408630 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-13 00:44:39.408640 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.142) 0:00:42.833 ********** 2026-04-13 00:44:39.408651 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408661 | orchestrator | 2026-04-13 00:44:39.408672 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-13 00:44:39.408683 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.106) 0:00:42.940 ********** 2026-04-13 00:44:39.408693 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408704 | orchestrator | 2026-04-13 00:44:39.408714 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-13 00:44:39.408725 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.125) 0:00:43.065 ********** 2026-04-13 00:44:39.408736 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:39.408747 | orchestrator |  "vgs_report": { 2026-04-13 00:44:39.408758 | orchestrator |  "vg": [] 2026-04-13 00:44:39.408768 | orchestrator |  } 2026-04-13 00:44:39.408779 | orchestrator | } 2026-04-13 00:44:39.408790 | orchestrator | 2026-04-13 00:44:39.408800 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-13 00:44:39.408819 | orchestrator | Monday 13 April 2026 00:44:38 +0000 (0:00:00.140) 0:00:43.206 ********** 2026-04-13 00:44:39.408829 | 
orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408840 | orchestrator | 2026-04-13 00:44:39.408850 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-13 00:44:39.408861 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.128) 0:00:43.335 ********** 2026-04-13 00:44:39.408872 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408882 | orchestrator | 2026-04-13 00:44:39.408893 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-13 00:44:39.408903 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.119) 0:00:43.454 ********** 2026-04-13 00:44:39.408914 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408924 | orchestrator | 2026-04-13 00:44:39.408935 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-13 00:44:39.408945 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.127) 0:00:43.581 ********** 2026-04-13 00:44:39.408956 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:39.408967 | orchestrator | 2026-04-13 00:44:39.408985 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-13 00:44:44.030372 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.138) 0:00:43.720 ********** 2026-04-13 00:44:44.030511 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030535 | orchestrator | 2026-04-13 00:44:44.030588 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-13 00:44:44.030605 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.141) 0:00:43.861 ********** 2026-04-13 00:44:44.030619 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030634 | orchestrator | 2026-04-13 00:44:44.030647 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-13 00:44:44.030661 | orchestrator | Monday 13 April 2026 00:44:39 +0000 (0:00:00.352) 0:00:44.214 ********** 2026-04-13 00:44:44.030675 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030690 | orchestrator | 2026-04-13 00:44:44.030704 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-13 00:44:44.030718 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.136) 0:00:44.350 ********** 2026-04-13 00:44:44.030733 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030748 | orchestrator | 2026-04-13 00:44:44.030761 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-13 00:44:44.030776 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.129) 0:00:44.480 ********** 2026-04-13 00:44:44.030787 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030795 | orchestrator | 2026-04-13 00:44:44.030804 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-13 00:44:44.030813 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.137) 0:00:44.618 ********** 2026-04-13 00:44:44.030821 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030830 | orchestrator | 2026-04-13 00:44:44.030838 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-13 00:44:44.030847 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.122) 0:00:44.741 ********** 2026-04-13 00:44:44.030856 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030864 | orchestrator | 2026-04-13 00:44:44.030873 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-13 00:44:44.030881 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.138) 0:00:44.879 ********** 2026-04-13 00:44:44.030890 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030899 
| orchestrator | 2026-04-13 00:44:44.030907 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-13 00:44:44.030917 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.151) 0:00:45.031 ********** 2026-04-13 00:44:44.030927 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030937 | orchestrator | 2026-04-13 00:44:44.030969 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-13 00:44:44.030979 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.126) 0:00:45.158 ********** 2026-04-13 00:44:44.030989 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.030999 | orchestrator | 2026-04-13 00:44:44.031009 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-13 00:44:44.031018 | orchestrator | Monday 13 April 2026 00:44:40 +0000 (0:00:00.143) 0:00:45.301 ********** 2026-04-13 00:44:44.031030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031042 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031052 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031062 | orchestrator | 2026-04-13 00:44:44.031072 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-13 00:44:44.031082 | orchestrator | Monday 13 April 2026 00:44:41 +0000 (0:00:00.181) 0:00:45.482 ********** 2026-04-13 00:44:44.031093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031103 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031112 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031123 | orchestrator | 2026-04-13 00:44:44.031133 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-13 00:44:44.031143 | orchestrator | Monday 13 April 2026 00:44:41 +0000 (0:00:00.143) 0:00:45.626 ********** 2026-04-13 00:44:44.031205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031239 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031254 | orchestrator | 2026-04-13 00:44:44.031268 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-13 00:44:44.031282 | orchestrator | Monday 13 April 2026 00:44:41 +0000 (0:00:00.180) 0:00:45.807 ********** 2026-04-13 00:44:44.031298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031331 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031346 | orchestrator | 2026-04-13 00:44:44.031378 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-13 00:44:44.031388 | orchestrator | Monday 13 April 2026 00:44:41 +0000 (0:00:00.362) 0:00:46.170 ********** 2026-04-13 
00:44:44.031396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031413 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031422 | orchestrator | 2026-04-13 00:44:44.031430 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-13 00:44:44.031439 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.180) 0:00:46.350 ********** 2026-04-13 00:44:44.031447 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031478 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031511 | orchestrator | 2026-04-13 00:44:44.031524 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-13 00:44:44.031533 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.145) 0:00:46.496 ********** 2026-04-13 00:44:44.031541 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031559 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031567 | orchestrator | 
2026-04-13 00:44:44.031576 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-13 00:44:44.031584 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.146) 0:00:46.642 ********** 2026-04-13 00:44:44.031593 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031610 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031619 | orchestrator | 2026-04-13 00:44:44.031627 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-13 00:44:44.031636 | orchestrator | Monday 13 April 2026 00:44:42 +0000 (0:00:00.145) 0:00:46.787 ********** 2026-04-13 00:44:44.031645 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:44.031654 | orchestrator | 2026-04-13 00:44:44.031662 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-13 00:44:44.031671 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.541) 0:00:47.329 ********** 2026-04-13 00:44:44.031679 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:44.031688 | orchestrator | 2026-04-13 00:44:44.031696 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-13 00:44:44.031705 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.501) 0:00:47.831 ********** 2026-04-13 00:44:44.031714 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:44:44.031722 | orchestrator | 2026-04-13 00:44:44.031731 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-13 00:44:44.031739 | orchestrator | Monday 13 April 2026 
00:44:43 +0000 (0:00:00.139) 0:00:47.970 ********** 2026-04-13 00:44:44.031748 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'vg_name': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'}) 2026-04-13 00:44:44.031758 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'vg_name': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'}) 2026-04-13 00:44:44.031767 | orchestrator | 2026-04-13 00:44:44.031775 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-13 00:44:44.031784 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.163) 0:00:48.133 ********** 2026-04-13 00:44:44.031793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:44.031810 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:44.031819 | orchestrator | 2026-04-13 00:44:44.031834 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-13 00:44:44.031843 | orchestrator | Monday 13 April 2026 00:44:43 +0000 (0:00:00.138) 0:00:48.272 ********** 2026-04-13 00:44:44.031851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:44.031866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:50.320160 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:50.320259 | orchestrator | 2026-04-13 
00:44:50.320270 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-13 00:44:50.320279 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.146) 0:00:48.418 ********** 2026-04-13 00:44:50.320287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'})  2026-04-13 00:44:50.320296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'})  2026-04-13 00:44:50.320303 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:44:50.320311 | orchestrator | 2026-04-13 00:44:50.320318 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-13 00:44:50.320325 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.178) 0:00:48.597 ********** 2026-04-13 00:44:50.320333 | orchestrator | ok: [testbed-node-4] => { 2026-04-13 00:44:50.320340 | orchestrator |  "lvm_report": { 2026-04-13 00:44:50.320349 | orchestrator |  "lv": [ 2026-04-13 00:44:50.320356 | orchestrator |  { 2026-04-13 00:44:50.320376 | orchestrator |  "lv_name": "osd-block-586ba51f-dba7-5dcd-8710-1804179cab86", 2026-04-13 00:44:50.320385 | orchestrator |  "vg_name": "ceph-586ba51f-dba7-5dcd-8710-1804179cab86" 2026-04-13 00:44:50.320392 | orchestrator |  }, 2026-04-13 00:44:50.320399 | orchestrator |  { 2026-04-13 00:44:50.320407 | orchestrator |  "lv_name": "osd-block-971aa970-5a40-5da7-9620-8f2c789358d2", 2026-04-13 00:44:50.320414 | orchestrator |  "vg_name": "ceph-971aa970-5a40-5da7-9620-8f2c789358d2" 2026-04-13 00:44:50.320421 | orchestrator |  } 2026-04-13 00:44:50.320428 | orchestrator |  ], 2026-04-13 00:44:50.320435 | orchestrator |  "pv": [ 2026-04-13 00:44:50.320442 | orchestrator |  { 2026-04-13 00:44:50.320449 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-13 
00:44:50.320456 | orchestrator |  "vg_name": "ceph-586ba51f-dba7-5dcd-8710-1804179cab86" 2026-04-13 00:44:50.320464 | orchestrator |  }, 2026-04-13 00:44:50.320471 | orchestrator |  { 2026-04-13 00:44:50.320478 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-13 00:44:50.320559 | orchestrator |  "vg_name": "ceph-971aa970-5a40-5da7-9620-8f2c789358d2" 2026-04-13 00:44:50.320567 | orchestrator |  } 2026-04-13 00:44:50.320575 | orchestrator |  ] 2026-04-13 00:44:50.320583 | orchestrator |  } 2026-04-13 00:44:50.320590 | orchestrator | } 2026-04-13 00:44:50.320598 | orchestrator | 2026-04-13 00:44:50.320605 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-13 00:44:50.320612 | orchestrator | 2026-04-13 00:44:50.320620 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-13 00:44:50.320627 | orchestrator | Monday 13 April 2026 00:44:44 +0000 (0:00:00.517) 0:00:49.114 ********** 2026-04-13 00:44:50.320634 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-13 00:44:50.320641 | orchestrator | 2026-04-13 00:44:50.320649 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-13 00:44:50.320656 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.248) 0:00:49.362 ********** 2026-04-13 00:44:50.320663 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:44:50.320685 | orchestrator | 2026-04-13 00:44:50.320693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320700 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.217) 0:00:49.579 ********** 2026-04-13 00:44:50.320707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-13 00:44:50.320714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-13 
00:44:50.320721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-13 00:44:50.320728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-13 00:44:50.320739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-13 00:44:50.320746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-13 00:44:50.320753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-13 00:44:50.320760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-13 00:44:50.320767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-13 00:44:50.320775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-13 00:44:50.320782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-13 00:44:50.320789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-13 00:44:50.320796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-13 00:44:50.320803 | orchestrator | 2026-04-13 00:44:50.320810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320817 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.473) 0:00:50.052 ********** 2026-04-13 00:44:50.320824 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.320831 | orchestrator | 2026-04-13 00:44:50.320838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320846 | orchestrator | Monday 13 April 2026 00:44:45 +0000 (0:00:00.209) 0:00:50.262 
********** 2026-04-13 00:44:50.320853 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.320860 | orchestrator | 2026-04-13 00:44:50.320867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320889 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.203) 0:00:50.465 ********** 2026-04-13 00:44:50.320897 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.320904 | orchestrator | 2026-04-13 00:44:50.320911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320918 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.197) 0:00:50.663 ********** 2026-04-13 00:44:50.320925 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.320932 | orchestrator | 2026-04-13 00:44:50.320940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320947 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.191) 0:00:50.855 ********** 2026-04-13 00:44:50.320954 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.320961 | orchestrator | 2026-04-13 00:44:50.320968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.320975 | orchestrator | Monday 13 April 2026 00:44:46 +0000 (0:00:00.206) 0:00:51.061 ********** 2026-04-13 00:44:50.320982 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.320989 | orchestrator | 2026-04-13 00:44:50.320996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.321003 | orchestrator | Monday 13 April 2026 00:44:47 +0000 (0:00:00.644) 0:00:51.706 ********** 2026-04-13 00:44:50.321015 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.321023 | orchestrator | 2026-04-13 00:44:50.321036 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-04-13 00:44:50.321043 | orchestrator | Monday 13 April 2026 00:44:47 +0000 (0:00:00.218) 0:00:51.925 ********** 2026-04-13 00:44:50.321050 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:50.321057 | orchestrator | 2026-04-13 00:44:50.321064 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.321072 | orchestrator | Monday 13 April 2026 00:44:47 +0000 (0:00:00.188) 0:00:52.113 ********** 2026-04-13 00:44:50.321079 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b) 2026-04-13 00:44:50.321086 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b) 2026-04-13 00:44:50.321093 | orchestrator | 2026-04-13 00:44:50.321101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.321108 | orchestrator | Monday 13 April 2026 00:44:48 +0000 (0:00:00.439) 0:00:52.552 ********** 2026-04-13 00:44:50.321115 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5) 2026-04-13 00:44:50.321122 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5) 2026-04-13 00:44:50.321129 | orchestrator | 2026-04-13 00:44:50.321136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.321143 | orchestrator | Monday 13 April 2026 00:44:48 +0000 (0:00:00.439) 0:00:52.992 ********** 2026-04-13 00:44:50.321150 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687) 2026-04-13 00:44:50.321157 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687) 2026-04-13 00:44:50.321165 | orchestrator | 2026-04-13 00:44:50.321172 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.321179 | orchestrator | Monday 13 April 2026 00:44:49 +0000 (0:00:00.458) 0:00:53.451 ********** 2026-04-13 00:44:50.321186 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b) 2026-04-13 00:44:50.321193 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b) 2026-04-13 00:44:50.321200 | orchestrator | 2026-04-13 00:44:50.321207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-13 00:44:50.321214 | orchestrator | Monday 13 April 2026 00:44:49 +0000 (0:00:00.465) 0:00:53.917 ********** 2026-04-13 00:44:50.321221 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-13 00:44:50.321228 | orchestrator | 2026-04-13 00:44:50.321235 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:50.321243 | orchestrator | Monday 13 April 2026 00:44:49 +0000 (0:00:00.371) 0:00:54.288 ********** 2026-04-13 00:44:50.321250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-13 00:44:50.321257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-13 00:44:50.321264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-13 00:44:50.321271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-13 00:44:50.321278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-13 00:44:50.321285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-13 00:44:50.321292 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-13 00:44:50.321299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-13 00:44:50.321306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-13 00:44:50.321318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-13 00:44:50.321325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-13 00:44:50.321337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-13 00:44:59.130087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-13 00:44:59.130209 | orchestrator | 2026-04-13 00:44:59.130234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130254 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.431) 0:00:54.719 ********** 2026-04-13 00:44:59.130274 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130294 | orchestrator | 2026-04-13 00:44:59.130314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130334 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.220) 0:00:54.939 ********** 2026-04-13 00:44:59.130350 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130361 | orchestrator | 2026-04-13 00:44:59.130371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130382 | orchestrator | Monday 13 April 2026 00:44:50 +0000 (0:00:00.208) 0:00:55.148 ********** 2026-04-13 00:44:59.130394 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130405 | orchestrator | 2026-04-13 00:44:59.130415 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130426 | orchestrator | Monday 13 April 2026 00:44:51 +0000 (0:00:00.719) 0:00:55.867 ********** 2026-04-13 00:44:59.130437 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130448 | orchestrator | 2026-04-13 00:44:59.130459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130470 | orchestrator | Monday 13 April 2026 00:44:51 +0000 (0:00:00.205) 0:00:56.073 ********** 2026-04-13 00:44:59.130510 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130521 | orchestrator | 2026-04-13 00:44:59.130532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130543 | orchestrator | Monday 13 April 2026 00:44:51 +0000 (0:00:00.204) 0:00:56.277 ********** 2026-04-13 00:44:59.130556 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130568 | orchestrator | 2026-04-13 00:44:59.130581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130593 | orchestrator | Monday 13 April 2026 00:44:52 +0000 (0:00:00.179) 0:00:56.457 ********** 2026-04-13 00:44:59.130606 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130618 | orchestrator | 2026-04-13 00:44:59.130631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130663 | orchestrator | Monday 13 April 2026 00:44:52 +0000 (0:00:00.201) 0:00:56.659 ********** 2026-04-13 00:44:59.130675 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130688 | orchestrator | 2026-04-13 00:44:59.130700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130712 | orchestrator | Monday 13 April 2026 00:44:52 +0000 (0:00:00.206) 0:00:56.866 ********** 
2026-04-13 00:44:59.130725 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-13 00:44:59.130738 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-13 00:44:59.130750 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-13 00:44:59.130762 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-13 00:44:59.130774 | orchestrator | 2026-04-13 00:44:59.130786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130799 | orchestrator | Monday 13 April 2026 00:44:53 +0000 (0:00:00.652) 0:00:57.518 ********** 2026-04-13 00:44:59.130811 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130823 | orchestrator | 2026-04-13 00:44:59.130835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130848 | orchestrator | Monday 13 April 2026 00:44:53 +0000 (0:00:00.196) 0:00:57.714 ********** 2026-04-13 00:44:59.130882 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130895 | orchestrator | 2026-04-13 00:44:59.130908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130919 | orchestrator | Monday 13 April 2026 00:44:53 +0000 (0:00:00.194) 0:00:57.909 ********** 2026-04-13 00:44:59.130930 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130941 | orchestrator | 2026-04-13 00:44:59.130952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-13 00:44:59.130962 | orchestrator | Monday 13 April 2026 00:44:53 +0000 (0:00:00.199) 0:00:58.109 ********** 2026-04-13 00:44:59.130973 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.130984 | orchestrator | 2026-04-13 00:44:59.130995 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-13 00:44:59.131006 | orchestrator | Monday 13 April 2026 00:44:54 +0000 
(0:00:00.216) 0:00:58.325 ********** 2026-04-13 00:44:59.131016 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131027 | orchestrator | 2026-04-13 00:44:59.131038 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-13 00:44:59.131049 | orchestrator | Monday 13 April 2026 00:44:54 +0000 (0:00:00.134) 0:00:58.460 ********** 2026-04-13 00:44:59.131059 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}}) 2026-04-13 00:44:59.131071 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7331b6c9-9d3b-5dac-8499-53ee0940f196'}}) 2026-04-13 00:44:59.131081 | orchestrator | 2026-04-13 00:44:59.131092 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-13 00:44:59.131103 | orchestrator | Monday 13 April 2026 00:44:54 +0000 (0:00:00.409) 0:00:58.869 ********** 2026-04-13 00:44:59.131116 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}) 2026-04-13 00:44:59.131129 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'}) 2026-04-13 00:44:59.131140 | orchestrator | 2026-04-13 00:44:59.131151 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-13 00:44:59.131183 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:01.790) 0:01:00.660 ********** 2026-04-13 00:44:59.131194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:44:59.131207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:44:59.131218 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131229 | orchestrator | 2026-04-13 00:44:59.131239 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-13 00:44:59.131250 | orchestrator | Monday 13 April 2026 00:44:56 +0000 (0:00:00.176) 0:01:00.836 ********** 2026-04-13 00:44:59.131261 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}) 2026-04-13 00:44:59.131280 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'}) 2026-04-13 00:44:59.131291 | orchestrator | 2026-04-13 00:44:59.131302 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-13 00:44:59.131312 | orchestrator | Monday 13 April 2026 00:44:57 +0000 (0:00:01.322) 0:01:02.159 ********** 2026-04-13 00:44:59.131346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:44:59.131358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:44:59.131390 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131402 | orchestrator | 2026-04-13 00:44:59.131413 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-13 00:44:59.131424 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.163) 0:01:02.322 ********** 2026-04-13 00:44:59.131435 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131457 | 
orchestrator | 2026-04-13 00:44:59.131468 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-13 00:44:59.131524 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.159) 0:01:02.481 ********** 2026-04-13 00:44:59.131535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:44:59.131547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:44:59.131558 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131569 | orchestrator | 2026-04-13 00:44:59.131579 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-13 00:44:59.131590 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.157) 0:01:02.638 ********** 2026-04-13 00:44:59.131601 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131612 | orchestrator | 2026-04-13 00:44:59.131623 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-13 00:44:59.131634 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.174) 0:01:02.813 ********** 2026-04-13 00:44:59.131645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:44:59.131656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:44:59.131667 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131678 | orchestrator | 2026-04-13 00:44:59.131689 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-13 00:44:59.131699 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.160) 0:01:02.974 ********** 2026-04-13 00:44:59.131710 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131721 | orchestrator | 2026-04-13 00:44:59.131732 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-13 00:44:59.131743 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.136) 0:01:03.110 ********** 2026-04-13 00:44:59.131754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:44:59.131765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:44:59.131776 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:44:59.131786 | orchestrator | 2026-04-13 00:44:59.131797 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-13 00:44:59.131808 | orchestrator | Monday 13 April 2026 00:44:58 +0000 (0:00:00.140) 0:01:03.250 ********** 2026-04-13 00:44:59.131819 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:44:59.131830 | orchestrator | 2026-04-13 00:44:59.131841 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-13 00:44:59.131852 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.125) 0:01:03.376 ********** 2026-04-13 00:44:59.131872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:45:05.528597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:45:05.528695 | 
orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.528704 | orchestrator | 2026-04-13 00:45:05.528712 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-13 00:45:05.528719 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.372) 0:01:03.748 ********** 2026-04-13 00:45:05.528724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:45:05.528730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:45:05.528736 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.528741 | orchestrator | 2026-04-13 00:45:05.528757 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-13 00:45:05.528763 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.163) 0:01:03.912 ********** 2026-04-13 00:45:05.528769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})  2026-04-13 00:45:05.528774 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})  2026-04-13 00:45:05.528780 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.528785 | orchestrator | 2026-04-13 00:45:05.528791 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-13 00:45:05.528796 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.160) 0:01:04.072 ********** 2026-04-13 00:45:05.528802 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.528807 | orchestrator | 2026-04-13 00:45:05.528813 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-13 00:45:05.528818 | orchestrator | Monday 13 April 2026 00:44:59 +0000 (0:00:00.143) 0:01:04.216 ********** 2026-04-13 00:45:05.528824 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.528829 | orchestrator | 2026-04-13 00:45:05.528834 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-13 00:45:05.528840 | orchestrator | Monday 13 April 2026 00:45:00 +0000 (0:00:00.146) 0:01:04.362 ********** 2026-04-13 00:45:05.528845 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.528851 | orchestrator | 2026-04-13 00:45:05.528857 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-13 00:45:05.528862 | orchestrator | Monday 13 April 2026 00:45:00 +0000 (0:00:00.138) 0:01:04.501 ********** 2026-04-13 00:45:05.528868 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:05.528874 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-13 00:45:05.528879 | orchestrator | } 2026-04-13 00:45:05.528885 | orchestrator | 2026-04-13 00:45:05.528891 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-13 00:45:05.528896 | orchestrator | Monday 13 April 2026 00:45:00 +0000 (0:00:00.145) 0:01:04.647 ********** 2026-04-13 00:45:05.528902 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:05.528907 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-13 00:45:05.528912 | orchestrator | } 2026-04-13 00:45:05.528918 | orchestrator | 2026-04-13 00:45:05.528923 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-13 00:45:05.528929 | orchestrator | Monday 13 April 2026 00:45:00 +0000 (0:00:00.145) 0:01:04.792 ********** 2026-04-13 00:45:05.528934 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:05.528940 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-13 00:45:05.528945 | orchestrator | } 2026-04-13 00:45:05.528951 | orchestrator | 2026-04-13 00:45:05.528956 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-13 00:45:05.528962 | orchestrator | Monday 13 April 2026 00:45:00 +0000 (0:00:00.146) 0:01:04.939 ********** 2026-04-13 00:45:05.528974 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:05.528980 | orchestrator | 2026-04-13 00:45:05.528985 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-13 00:45:05.528991 | orchestrator | Monday 13 April 2026 00:45:01 +0000 (0:00:00.528) 0:01:05.468 ********** 2026-04-13 00:45:05.528996 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:05.529001 | orchestrator | 2026-04-13 00:45:05.529007 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-13 00:45:05.529012 | orchestrator | Monday 13 April 2026 00:45:01 +0000 (0:00:00.524) 0:01:05.993 ********** 2026-04-13 00:45:05.529018 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:05.529023 | orchestrator | 2026-04-13 00:45:05.529028 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-13 00:45:05.529034 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.534) 0:01:06.528 ********** 2026-04-13 00:45:05.529039 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:45:05.529045 | orchestrator | 2026-04-13 00:45:05.529050 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-13 00:45:05.529055 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.371) 0:01:06.899 ********** 2026-04-13 00:45:05.529061 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529066 | orchestrator | 2026-04-13 00:45:05.529072 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-13 00:45:05.529077 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.119) 0:01:07.019 ********** 2026-04-13 00:45:05.529083 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529088 | orchestrator | 2026-04-13 00:45:05.529093 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-13 00:45:05.529099 | orchestrator | Monday 13 April 2026 00:45:02 +0000 (0:00:00.137) 0:01:07.157 ********** 2026-04-13 00:45:05.529104 | orchestrator | ok: [testbed-node-5] => { 2026-04-13 00:45:05.529110 | orchestrator |  "vgs_report": { 2026-04-13 00:45:05.529115 | orchestrator |  "vg": [] 2026-04-13 00:45:05.529133 | orchestrator |  } 2026-04-13 00:45:05.529139 | orchestrator | } 2026-04-13 00:45:05.529145 | orchestrator | 2026-04-13 00:45:05.529150 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-13 00:45:05.529156 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.167) 0:01:07.324 ********** 2026-04-13 00:45:05.529161 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529167 | orchestrator | 2026-04-13 00:45:05.529172 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-13 00:45:05.529177 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.132) 0:01:07.456 ********** 2026-04-13 00:45:05.529183 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529188 | orchestrator | 2026-04-13 00:45:05.529194 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-13 00:45:05.529199 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.144) 0:01:07.601 ********** 2026-04-13 00:45:05.529205 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529210 | orchestrator | 2026-04-13 00:45:05.529215 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-13 00:45:05.529221 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.136) 0:01:07.738 ********** 2026-04-13 00:45:05.529229 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529234 | orchestrator | 2026-04-13 00:45:05.529240 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-13 00:45:05.529245 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.139) 0:01:07.878 ********** 2026-04-13 00:45:05.529251 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529256 | orchestrator | 2026-04-13 00:45:05.529262 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-13 00:45:05.529267 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.135) 0:01:08.013 ********** 2026-04-13 00:45:05.529272 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529278 | orchestrator | 2026-04-13 00:45:05.529287 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-13 00:45:05.529293 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.139) 0:01:08.153 ********** 2026-04-13 00:45:05.529298 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529304 | orchestrator | 2026-04-13 00:45:05.529309 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-13 00:45:05.529314 | orchestrator | Monday 13 April 2026 00:45:03 +0000 (0:00:00.128) 0:01:08.281 ********** 2026-04-13 00:45:05.529320 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:45:05.529325 | orchestrator | 2026-04-13 00:45:05.529331 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-13 00:45:05.529336 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.149) 0:01:08.431 ********** 2026-04-13 00:45:05.529342 | orchestrator | skipping: 
[testbed-node-5]
2026-04-13 00:45:05.529347 | orchestrator |
2026-04-13 00:45:05.529352 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-13 00:45:05.529358 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.371) 0:01:08.802 **********
2026-04-13 00:45:05.529363 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529369 | orchestrator |
2026-04-13 00:45:05.529374 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-13 00:45:05.529380 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.144) 0:01:08.947 **********
2026-04-13 00:45:05.529385 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529392 | orchestrator |
2026-04-13 00:45:05.529402 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-13 00:45:05.529412 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.145) 0:01:09.092 **********
2026-04-13 00:45:05.529422 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529431 | orchestrator |
2026-04-13 00:45:05.529441 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-13 00:45:05.529451 | orchestrator | Monday 13 April 2026 00:45:04 +0000 (0:00:00.136) 0:01:09.228 **********
2026-04-13 00:45:05.529461 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529487 | orchestrator |
2026-04-13 00:45:05.529499 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-13 00:45:05.529505 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.127) 0:01:09.356 **********
2026-04-13 00:45:05.529510 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529515 | orchestrator |
2026-04-13 00:45:05.529521 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-13 00:45:05.529526 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.124) 0:01:09.480 **********
2026-04-13 00:45:05.529532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:05.529537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:05.529543 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529548 | orchestrator |
2026-04-13 00:45:05.529553 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-13 00:45:05.529559 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.141) 0:01:09.621 **********
2026-04-13 00:45:05.529564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:05.529570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:05.529575 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:05.529580 | orchestrator |
2026-04-13 00:45:05.529586 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-13 00:45:05.529591 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.143) 0:01:09.765 **********
2026-04-13 00:45:05.529607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.658966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659095 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659123 | orchestrator |
2026-04-13 00:45:08.659146 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-13 00:45:08.659167 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.167) 0:01:09.933 **********
2026-04-13 00:45:08.659187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659201 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659212 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659223 | orchestrator |
2026-04-13 00:45:08.659234 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-13 00:45:08.659245 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.164) 0:01:10.097 **********
2026-04-13 00:45:08.659255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659277 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659288 | orchestrator |
2026-04-13 00:45:08.659299 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-13 00:45:08.659311 | orchestrator | Monday 13 April 2026 00:45:05 +0000 (0:00:00.150) 0:01:10.247 **********
2026-04-13 00:45:08.659329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659348 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659389 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659408 | orchestrator |
2026-04-13 00:45:08.659427 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-13 00:45:08.659444 | orchestrator | Monday 13 April 2026 00:45:06 +0000 (0:00:00.139) 0:01:10.387 **********
2026-04-13 00:45:08.659464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659519 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659534 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659547 | orchestrator |
2026-04-13 00:45:08.659560 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-13 00:45:08.659573 | orchestrator | Monday 13 April 2026 00:45:06 +0000 (0:00:00.410) 0:01:10.798 **********
2026-04-13 00:45:08.659585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659610 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659623 | orchestrator |
2026-04-13 00:45:08.659635 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-13 00:45:08.659672 | orchestrator | Monday 13 April 2026 00:45:06 +0000 (0:00:00.157) 0:01:10.956 **********
2026-04-13 00:45:08.659685 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:45:08.659698 | orchestrator |
2026-04-13 00:45:08.659710 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-13 00:45:08.659723 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.493) 0:01:11.449 **********
2026-04-13 00:45:08.659734 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:45:08.659747 | orchestrator |
2026-04-13 00:45:08.659760 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-13 00:45:08.659772 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.537) 0:01:11.986 **********
2026-04-13 00:45:08.659784 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:45:08.659796 | orchestrator |
2026-04-13 00:45:08.659809 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-13 00:45:08.659822 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.152) 0:01:12.139 **********
2026-04-13 00:45:08.659836 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'vg_name': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659848 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'vg_name': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659858 | orchestrator |
2026-04-13 00:45:08.659869 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-13 00:45:08.659880 | orchestrator | Monday 13 April 2026 00:45:07 +0000 (0:00:00.167) 0:01:12.307 **********
2026-04-13 00:45:08.659914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.659933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.659951 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.659969 | orchestrator |
2026-04-13 00:45:08.659986 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-13 00:45:08.660004 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.169) 0:01:12.476 **********
2026-04-13 00:45:08.660031 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.660050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.660068 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.660087 | orchestrator |
2026-04-13 00:45:08.660105 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-13 00:45:08.660122 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.147) 0:01:12.624 **********
2026-04-13 00:45:08.660140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'})
2026-04-13 00:45:08.660158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'})
2026-04-13 00:45:08.660176 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:08.660196 | orchestrator |
2026-04-13 00:45:08.660214 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-13 00:45:08.660230 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.189) 0:01:12.813 **********
2026-04-13 00:45:08.660241 | orchestrator | ok: [testbed-node-5] => {
2026-04-13 00:45:08.660252 | orchestrator |     "lvm_report": {
2026-04-13 00:45:08.660263 | orchestrator |         "lv": [
2026-04-13 00:45:08.660274 | orchestrator |             {
2026-04-13 00:45:08.660299 | orchestrator |                 "lv_name": "osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196",
2026-04-13 00:45:08.660310 | orchestrator |                 "vg_name": "ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196"
2026-04-13 00:45:08.660321 | orchestrator |             },
2026-04-13 00:45:08.660332 | orchestrator |             {
2026-04-13 00:45:08.660342 | orchestrator |                 "lv_name": "osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000",
2026-04-13 00:45:08.660353 | orchestrator |                 "vg_name": "ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000"
2026-04-13 00:45:08.660364 | orchestrator |             }
2026-04-13 00:45:08.660375 | orchestrator |         ],
2026-04-13 00:45:08.660385 | orchestrator |         "pv": [
2026-04-13 00:45:08.660396 | orchestrator |             {
2026-04-13 00:45:08.660406 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-13 00:45:08.660417 | orchestrator |                 "vg_name": "ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000"
2026-04-13 00:45:08.660428 | orchestrator |             },
2026-04-13 00:45:08.660439 | orchestrator |             {
2026-04-13 00:45:08.660450 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-13 00:45:08.660460 | orchestrator |                 "vg_name": "ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196"
2026-04-13 00:45:08.660499 | orchestrator |             }
2026-04-13 00:45:08.660512 | orchestrator |         ]
2026-04-13 00:45:08.660523 | orchestrator |     }
2026-04-13 00:45:08.660533 | orchestrator | }
2026-04-13 00:45:08.660545 | orchestrator |
2026-04-13 00:45:08.660555 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:45:08.660566 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-13 00:45:08.660578 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-13 00:45:08.660589 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-13 00:45:08.660600 | orchestrator |
2026-04-13 00:45:08.660610 | orchestrator |
2026-04-13 00:45:08.660621 | orchestrator |
2026-04-13 00:45:08.660633 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:45:08.660643 | orchestrator | Monday 13 April 2026 00:45:08 +0000 (0:00:00.145) 0:01:12.958 **********
2026-04-13 00:45:08.660654 | orchestrator | ===============================================================================
2026-04-13 00:45:08.660665 | orchestrator | Create block VGs -------------------------------------------------------- 5.54s
2026-04-13 00:45:08.660676 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s
2026-04-13 00:45:08.660686 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.90s
2026-04-13 00:45:08.660697 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s
2026-04-13 00:45:08.660707 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2026-04-13 00:45:08.660718 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s
2026-04-13 00:45:08.660729 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2026-04-13 00:45:08.660740 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s
2026-04-13 00:45:08.660760 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s
2026-04-13 00:45:09.118902 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2026-04-13 00:45:09.118991 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s
2026-04-13 00:45:09.119002 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2026-04-13 00:45:09.119010 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.85s
2026-04-13 00:45:09.119018 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2026-04-13 00:45:09.119048 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2026-04-13 00:45:09.119056 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2026-04-13 00:45:09.119063 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s
2026-04-13 00:45:09.119082 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-04-13 00:45:09.119090 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.69s
2026-04-13 00:45:09.119097 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.69s
2026-04-13 00:45:20.653489 | orchestrator | 2026-04-13 00:45:20 | INFO  | Prepare task for execution of facts.
2026-04-13 00:45:20.733147 | orchestrator | 2026-04-13 00:45:20 | INFO  | Task cb833745-6e5c-465b-a469-6d889c651e0d (facts) was prepared for execution.
2026-04-13 00:45:20.733246 | orchestrator | 2026-04-13 00:45:20 | INFO  | It takes a moment until task cb833745-6e5c-465b-a469-6d889c651e0d (facts) has been started and output is visible here.
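The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" tasks above can be sketched roughly as follows. This is a minimal illustration, not the playbook's actual code: it assumes `lvs`/`pvs` were run with `--reportformat json` (the standard lvm2 JSON envelope), and the sample values are taken from the `lvm_report` printed in the log; the variable names other than `_lvs_cmd_output`/`_pvs_cmd_output` are hypothetical.

```python
import json

# Output shaped like `lvs -o lv_name,vg_name --reportformat json` and
# `pvs -o pv_name,vg_name --reportformat json` (one entry each, values
# taken from the "Print LVM report data" task in the log above).
_lvs_cmd_output = '''{"report": [{"lv": [
  {"lv_name": "osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196",
   "vg_name": "ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196"}]}]}'''
_pvs_cmd_output = '''{"report": [{"pv": [
  {"pv_name": "/dev/sdc",
   "vg_name": "ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196"}]}]}'''

# Combine both reports into one dict, mirroring the "lvm_report"
# structure the playbook prints.
lvm_report = {
    "lv": json.loads(_lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(_pvs_cmd_output)["report"][0]["pv"],
}

# Build "vg/lv" names from each LV entry, as the "Create list of
# VG/LV names" task loops over (lv_name, vg_name) items.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
print(vg_lv_names)
```

The later "Fail if ... defined in lvm_volumes is missing" tasks then only need membership checks against such a list.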
2026-04-13 00:45:32.316966 | orchestrator |
2026-04-13 00:45:32.317078 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-13 00:45:32.317102 | orchestrator |
2026-04-13 00:45:32.317138 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-13 00:45:32.317150 | orchestrator | Monday 13 April 2026 00:45:24 +0000 (0:00:00.359) 0:00:00.359 **********
2026-04-13 00:45:32.317161 | orchestrator | ok: [testbed-manager]
2026-04-13 00:45:32.317173 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:45:32.317184 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:45:32.317195 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:45:32.317206 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:45:32.317217 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:45:32.317227 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:45:32.317238 | orchestrator |
2026-04-13 00:45:32.317249 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-13 00:45:32.317260 | orchestrator | Monday 13 April 2026 00:45:25 +0000 (0:00:01.324) 0:00:01.683 **********
2026-04-13 00:45:32.317271 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:45:32.317283 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:45:32.317293 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:45:32.317304 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:45:32.317340 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:45:32.317352 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:45:32.317362 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:32.317373 | orchestrator |
2026-04-13 00:45:32.317384 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-13 00:45:32.317395 | orchestrator |
2026-04-13 00:45:32.317406 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-13 00:45:32.317417 | orchestrator | Monday 13 April 2026 00:45:26 +0000 (0:00:01.239) 0:00:02.923 **********
2026-04-13 00:45:32.317428 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:45:32.317439 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:45:32.317449 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:45:32.317486 | orchestrator | ok: [testbed-manager]
2026-04-13 00:45:32.317497 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:45:32.317508 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:45:32.317518 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:45:32.317529 | orchestrator |
2026-04-13 00:45:32.317540 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-13 00:45:32.317550 | orchestrator |
2026-04-13 00:45:32.317561 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-13 00:45:32.317572 | orchestrator | Monday 13 April 2026 00:45:31 +0000 (0:00:04.725) 0:00:07.648 **********
2026-04-13 00:45:32.317583 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:45:32.317594 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:45:32.317604 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:45:32.317639 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:45:32.317650 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:45:32.317661 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:45:32.317671 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:45:32.317683 | orchestrator |
2026-04-13 00:45:32.317701 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:45:32.317714 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317726 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317736 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317751 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317769 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317788 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317813 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:45:32.317837 | orchestrator |
2026-04-13 00:45:32.317854 | orchestrator |
2026-04-13 00:45:32.317871 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:45:32.317889 | orchestrator | Monday 13 April 2026 00:45:31 +0000 (0:00:00.527) 0:00:08.175 **********
2026-04-13 00:45:32.317907 | orchestrator | ===============================================================================
2026-04-13 00:45:32.317923 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s
2026-04-13 00:45:32.317938 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s
2026-04-13 00:45:32.317973 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2026-04-13 00:45:32.317991 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2026-04-13 00:45:43.888989 | orchestrator | 2026-04-13 00:45:43 | INFO  | Prepare task for execution of frr.
2026-04-13 00:45:43.971376 | orchestrator | 2026-04-13 00:45:43 | INFO  | Task 5056d141-6705-4d39-9d9d-a1abbb635045 (frr) was prepared for execution.
2026-04-13 00:45:43.971482 | orchestrator | 2026-04-13 00:45:43 | INFO  | It takes a moment until task 5056d141-6705-4d39-9d9d-a1abbb635045 (frr) has been started and output is visible here.
2026-04-13 00:46:07.884117 | orchestrator |
2026-04-13 00:46:07.884214 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-13 00:46:07.884232 | orchestrator |
2026-04-13 00:46:07.884244 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-13 00:46:07.884255 | orchestrator | Monday 13 April 2026 00:45:46 +0000 (0:00:00.236) 0:00:00.236 **********
2026-04-13 00:46:07.884266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-13 00:46:07.884278 | orchestrator |
2026-04-13 00:46:07.884289 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-13 00:46:07.884300 | orchestrator | Monday 13 April 2026 00:45:46 +0000 (0:00:00.163) 0:00:00.400 **********
2026-04-13 00:46:07.884311 | orchestrator | changed: [testbed-manager]
2026-04-13 00:46:07.884322 | orchestrator |
2026-04-13 00:46:07.884333 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-13 00:46:07.884348 | orchestrator | Monday 13 April 2026 00:45:48 +0000 (0:00:01.529) 0:00:01.929 **********
2026-04-13 00:46:07.884397 | orchestrator | changed: [testbed-manager]
2026-04-13 00:46:07.884417 | orchestrator |
2026-04-13 00:46:07.884475 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-13 00:46:07.884487 | orchestrator | Monday 13 April 2026 00:45:57 +0000 (0:00:08.909) 0:00:10.839 **********
2026-04-13 00:46:07.884498 | orchestrator | ok: [testbed-manager]
2026-04-13 00:46:07.884509 | orchestrator |
2026-04-13 00:46:07.884527 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-13 00:46:07.884545 | orchestrator | Monday 13 April 2026 00:45:58 +0000 (0:00:00.984) 0:00:11.823 **********
2026-04-13 00:46:07.884563 | orchestrator | changed: [testbed-manager]
2026-04-13 00:46:07.884574 | orchestrator |
2026-04-13 00:46:07.884585 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-13 00:46:07.884595 | orchestrator | Monday 13 April 2026 00:45:59 +0000 (0:00:00.923) 0:00:12.746 **********
2026-04-13 00:46:07.884606 | orchestrator | ok: [testbed-manager]
2026-04-13 00:46:07.884616 | orchestrator |
2026-04-13 00:46:07.884627 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-13 00:46:07.884638 | orchestrator | Monday 13 April 2026 00:46:00 +0000 (0:00:01.110) 0:00:13.857 **********
2026-04-13 00:46:07.884651 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:46:07.884668 | orchestrator |
2026-04-13 00:46:07.884679 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-13 00:46:07.884690 | orchestrator | Monday 13 April 2026 00:46:00 +0000 (0:00:00.154) 0:00:14.012 **********
2026-04-13 00:46:07.884701 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:46:07.884711 | orchestrator |
2026-04-13 00:46:07.884723 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-13 00:46:07.884733 | orchestrator | Monday 13 April 2026 00:46:00 +0000 (0:00:00.285) 0:00:14.298 **********
2026-04-13 00:46:07.884744 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:46:07.884754 | orchestrator |
2026-04-13 00:46:07.884765 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-13 00:46:07.884776 | orchestrator | Monday 13 April 2026 00:46:00 +0000 (0:00:00.154) 0:00:14.452 **********
2026-04-13 00:46:07.884786 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:46:07.884797 | orchestrator |
2026-04-13 00:46:07.884808 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-13 00:46:07.884818 | orchestrator | Monday 13 April 2026 00:46:00 +0000 (0:00:00.126) 0:00:14.578 **********
2026-04-13 00:46:07.884829 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:46:07.884839 | orchestrator |
2026-04-13 00:46:07.884850 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-13 00:46:07.884861 | orchestrator | Monday 13 April 2026 00:46:01 +0000 (0:00:00.121) 0:00:14.700 **********
2026-04-13 00:46:07.884872 | orchestrator | changed: [testbed-manager]
2026-04-13 00:46:07.884882 | orchestrator |
2026-04-13 00:46:07.884893 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-13 00:46:07.884904 | orchestrator | Monday 13 April 2026 00:46:01 +0000 (0:00:00.850) 0:00:15.551 **********
2026-04-13 00:46:07.884914 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-13 00:46:07.884925 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-13 00:46:07.884937 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-13 00:46:07.884948 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-13 00:46:07.884959 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-13 00:46:07.884969 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-13 00:46:07.884980 | orchestrator |
2026-04-13 00:46:07.884991 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-13 00:46:07.885009 | orchestrator | Monday 13 April 2026 00:46:03 +0000 (0:00:02.109) 0:00:17.661 **********
2026-04-13 00:46:07.885020 | orchestrator | ok: [testbed-manager]
2026-04-13 00:46:07.885031 | orchestrator |
2026-04-13 00:46:07.885042 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-13 00:46:07.885052 | orchestrator | Monday 13 April 2026 00:46:05 +0000 (0:00:01.248) 0:00:18.909 **********
2026-04-13 00:46:07.885063 | orchestrator | changed: [testbed-manager]
2026-04-13 00:46:07.885074 | orchestrator |
2026-04-13 00:46:07.885084 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:46:07.885095 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-13 00:46:07.885106 | orchestrator |
2026-04-13 00:46:07.885117 | orchestrator |
2026-04-13 00:46:07.885148 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:46:07.885161 | orchestrator | Monday 13 April 2026 00:46:07 +0000 (0:00:02.366) 0:00:21.275 **********
2026-04-13 00:46:07.885172 | orchestrator | ===============================================================================
2026-04-13 00:46:07.885183 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.91s
2026-04-13 00:46:07.885194 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 2.37s
2026-04-13 00:46:07.885204 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.11s
2026-04-13 00:46:07.885215 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.53s
2026-04-13 00:46:07.885226 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.25s
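The "Set sysctl parameters" task above applies six kernel settings, listed verbatim in the log's loop items. As a small illustration of what those settings amount to on the node (this is an editorial sketch, not part of the role; `proc_path` is a hypothetical helper, and the dot-to-path mapping under /proc/sys is standard sysctl behavior):

```python
# Sysctl key/value pairs taken verbatim from the
# "osism.services.frr : Set sysctl parameters" loop items above.
FRR_SYSCTLS = {
    "net.ipv4.ip_forward": 1,                                  # route between interfaces
    "net.ipv4.conf.all.send_redirects": 0,                     # no ICMP redirects out
    "net.ipv4.conf.all.accept_redirects": 0,                   # ignore ICMP redirects in
    "net.ipv4.fib_multipath_hash_policy": 1,                   # L4 hashing for ECMP
    "net.ipv4.conf.default.ignore_routes_with_linkdown": 1,    # skip dead links
    "net.ipv4.conf.all.rp_filter": 2,                          # loose reverse-path filter
}

def proc_path(key: str) -> str:
    """Translate a sysctl key to the file it is written to under /proc/sys."""
    return "/proc/sys/" + key.replace(".", "/")

for key, value in FRR_SYSCTLS.items():
    print(f"{proc_path(key)} = {value}")
```

On the node itself, `sysctl net.ipv4.ip_forward` (or reading the corresponding /proc/sys file) would confirm the applied value.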
2026-04-13 00:46:07.885236 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s
2026-04-13 00:46:07.885247 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.98s
2026-04-13 00:46:07.885258 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s
2026-04-13 00:46:07.885268 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.85s
2026-04-13 00:46:07.885279 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.29s
2026-04-13 00:46:07.885290 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.16s
2026-04-13 00:46:07.885300 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s
2026-04-13 00:46:07.885311 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-04-13 00:46:07.885322 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s
2026-04-13 00:46:07.885333 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.12s
2026-04-13 00:46:08.011267 | orchestrator |
2026-04-13 00:46:08.013046 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Apr 13 00:46:08 UTC 2026
2026-04-13 00:46:08.013106 | orchestrator |
2026-04-13 00:46:09.119388 | orchestrator | 2026-04-13 00:46:09 | INFO  | Collection nutshell is prepared for execution
2026-04-13 00:46:09.224622 | orchestrator | 2026-04-13 00:46:09 | INFO  | A [0] - dotfiles
2026-04-13 00:46:19.322253 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - homer
2026-04-13 00:46:19.322316 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - netdata
2026-04-13 00:46:19.322340 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - openstackclient
2026-04-13 00:46:19.322369 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - phpmyadmin
2026-04-13 00:46:19.322383 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - common
2026-04-13 00:46:19.326165 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- loadbalancer
2026-04-13 00:46:19.326256 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [2] --- opensearch
2026-04-13 00:46:19.326513 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [2] --- mariadb-ng
2026-04-13 00:46:19.326650 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [3] ---- horizon
2026-04-13 00:46:19.326938 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [3] ---- keystone
2026-04-13 00:46:19.327261 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- neutron
2026-04-13 00:46:19.327564 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [5] ------ wait-for-nova
2026-04-13 00:46:19.327845 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [6] ------- octavia
2026-04-13 00:46:19.328921 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- barbican
2026-04-13 00:46:19.329108 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- designate
2026-04-13 00:46:19.329213 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- ironic
2026-04-13 00:46:19.329588 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- placement
2026-04-13 00:46:19.329794 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- magnum
2026-04-13 00:46:19.331341 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- openvswitch
2026-04-13 00:46:19.331632 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [2] --- ovn
2026-04-13 00:46:19.332029 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- memcached
2026-04-13 00:46:19.332149 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- redis
2026-04-13 00:46:19.332444 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- rabbitmq-ng
2026-04-13 00:46:19.332948 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - kubernetes
2026-04-13 00:46:19.335486 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- kubeconfig
2026-04-13 00:46:19.335527 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- copy-kubeconfig
2026-04-13 00:46:19.335993 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [0] - ceph
2026-04-13 00:46:19.337890 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [1] -- ceph-pools
2026-04-13 00:46:19.338115 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [2] --- copy-ceph-keys
2026-04-13 00:46:19.338212 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [3] ---- cephclient
2026-04-13 00:46:19.338224 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-13 00:46:19.338240 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- wait-for-keystone
2026-04-13 00:46:19.338882 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-13 00:46:19.339330 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [5] ------ glance
2026-04-13 00:46:19.339367 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [5] ------ cinder
2026-04-13 00:46:19.339386 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [5] ------ nova
2026-04-13 00:46:19.339653 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [4] ----- prometheus
2026-04-13 00:46:19.339685 | orchestrator | 2026-04-13 00:46:19 | INFO  | A [5] ------ grafana
2026-04-13 00:46:19.510081 | orchestrator | 2026-04-13 00:46:19 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-13 00:46:19.510163 | orchestrator | 2026-04-13 00:46:19 | INFO  | Tasks are running in the background
2026-04-13 00:46:21.245743 | orchestrator | 2026-04-13 00:46:21 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-13 00:46:23.436081 | orchestrator | 2026-04-13 00:46:23 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED
2026-04-13 00:46:23.438714 | orchestrator | 2026-04-13 00:46:23 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED
2026-04-13 00:46:23.442638 | orchestrator | 2026-04-13 00:46:23 | INFO
 | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:23.443619 | orchestrator | 2026-04-13 00:46:23 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:23.444224 | orchestrator | 2026-04-13 00:46:23 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:23.445006 | orchestrator | 2026-04-13 00:46:23 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:23.447266 | orchestrator | 2026-04-13 00:46:23 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:23.447316 | orchestrator | 2026-04-13 00:46:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:26.507380 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED 2026-04-13 00:46:26.510935 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:26.512314 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:26.515503 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:26.515929 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:26.516707 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:26.517353 | orchestrator | 2026-04-13 00:46:26 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:26.517380 | orchestrator | 2026-04-13 00:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:29.549039 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED 2026-04-13 00:46:29.549401 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task 
ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:29.551699 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:29.556456 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:29.557002 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:29.557386 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:29.558358 | orchestrator | 2026-04-13 00:46:29 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:29.558406 | orchestrator | 2026-04-13 00:46:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:32.879181 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED 2026-04-13 00:46:32.879285 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:32.879309 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:32.879327 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:32.879345 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:32.879363 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:32.879379 | orchestrator | 2026-04-13 00:46:32 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:32.879485 | orchestrator | 2026-04-13 00:46:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:35.930593 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task 
f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED 2026-04-13 00:46:35.930658 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:35.930670 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:35.930679 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:35.932260 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:35.932293 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:35.934291 | orchestrator | 2026-04-13 00:46:35 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:35.934350 | orchestrator | 2026-04-13 00:46:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:39.034521 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED 2026-04-13 00:46:39.044746 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:39.047850 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:39.048700 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:39.051969 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:39.057602 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:39.062719 | orchestrator | 2026-04-13 00:46:39 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:39.062779 | orchestrator | 2026-04-13 
00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:42.126801 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state STARTED 2026-04-13 00:46:42.128964 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:42.144178 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:42.144317 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:42.144344 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:42.144366 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:42.144384 | orchestrator | 2026-04-13 00:46:42 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:42.144448 | orchestrator | 2026-04-13 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:45.216855 | orchestrator | 2026-04-13 00:46:45.216946 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-13 00:46:45.216962 | orchestrator | 2026-04-13 00:46:45.216972 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-04-13 00:46:45.216988 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.757) 0:00:00.757 ********** 2026-04-13 00:46:45.217016 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:46:45.217027 | orchestrator | changed: [testbed-manager] 2026-04-13 00:46:45.217036 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:46:45.217044 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:46:45.217051 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:46:45.217059 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:46:45.217067 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:46:45.217075 | orchestrator | 2026-04-13 00:46:45.217084 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-04-13 00:46:45.217092 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:05.535) 0:00:06.293 ********** 2026-04-13 00:46:45.217101 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-13 00:46:45.217110 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-13 00:46:45.217119 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-13 00:46:45.217127 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-13 00:46:45.217135 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-13 00:46:45.217143 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-13 00:46:45.217152 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-13 00:46:45.217161 | orchestrator | 2026-04-13 00:46:45.217169 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-04-13 00:46:45.217181 | orchestrator | Monday 13 April 2026 00:46:37 +0000 (0:00:02.447) 0:00:08.740 ********** 2026-04-13 00:46:45.217194 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:36.743511', 'end': '2026-04-13 00:46:36.750440', 'delta': '0:00:00.006929', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217205 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:36.856884', 'end': '2026-04-13 00:46:36.868269', 'delta': '0:00:00.011385', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217215 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:36.925934', 'end': '2026-04-13 00:46:36.932479', 'delta': '0:00:00.006545', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217278 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:36.873763', 'end': '2026-04-13 00:46:36.879373', 'delta': '0:00:00.005610', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217300 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:36.683553', 'end': '2026-04-13 00:46:36.687792', 'delta': '0:00:00.004239', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217311 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:37.343586', 'end': '2026-04-13 00:46:37.348080', 'delta': '0:00:00.004494', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217320 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-13 00:46:37.616209', 'end': '2026-04-13 00:46:37.622169', 'delta': '0:00:00.005960', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-13 00:46:45.217329 | orchestrator | 2026-04-13 00:46:45.217338 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-04-13 00:46:45.217346 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:01.934) 0:00:10.675 ********** 2026-04-13 00:46:45.217354 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-13 00:46:45.217363 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-13 00:46:45.217370 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-13 00:46:45.217379 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-13 00:46:45.217387 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-13 00:46:45.217429 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-13 00:46:45.217438 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-13 00:46:45.217447 | orchestrator | 2026-04-13 00:46:45.217455 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-04-13 00:46:45.217463 | orchestrator | Monday 13 April 2026 00:46:41 +0000 (0:00:01.877) 0:00:12.552 ********** 2026-04-13 00:46:45.217471 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-13 00:46:45.217479 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-13 00:46:45.217487 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-13 00:46:45.217496 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-13 00:46:45.217505 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-13 00:46:45.217514 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-13 00:46:45.217523 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-13 00:46:45.217532 | orchestrator | 2026-04-13 00:46:45.217540 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:46:45.217557 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217568 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217577 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217585 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217593 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217602 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217611 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:46:45.217620 | orchestrator | 2026-04-13 00:46:45.217629 | orchestrator | 2026-04-13 00:46:45.217638 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-13 00:46:45.217647 | orchestrator | Monday 13 April 2026 00:46:43 +0000 (0:00:02.308) 0:00:14.861 ********** 2026-04-13 00:46:45.217656 | orchestrator | =============================================================================== 2026-04-13 00:46:45.217665 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.54s 2026-04-13 00:46:45.217675 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.45s 2026-04-13 00:46:45.217684 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.31s 2026-04-13 00:46:45.217692 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.93s 2026-04-13 00:46:45.217701 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.88s 2026-04-13 00:46:45.217711 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task f5d06f10-a7bf-4911-be2f-a7c788ea310c is in state SUCCESS 2026-04-13 00:46:45.217720 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:45.217993 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:45.222060 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:45.224323 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:45.225176 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:45.226206 | orchestrator | 2026-04-13 00:46:45 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:45.226231 | orchestrator | 2026-04-13 00:46:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 00:46:48.313803 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:48.313875 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:48.313884 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:48.315484 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:48.321627 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:48.321709 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:46:48.321731 | orchestrator | 2026-04-13 00:46:48 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:48.321752 | orchestrator | 2026-04-13 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:51.485943 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:51.486109 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:51.486133 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:51.488723 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:51.488802 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:51.488830 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:46:51.489207 | orchestrator | 2026-04-13 00:46:51 | INFO  | Task 
1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:51.489248 | orchestrator | 2026-04-13 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:54.547111 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:54.547204 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:54.547219 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:54.551247 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:54.560628 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:54.560683 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:46:54.562485 | orchestrator | 2026-04-13 00:46:54 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:54.562514 | orchestrator | 2026-04-13 00:46:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:46:57.647038 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:46:57.648588 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:46:57.649112 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:46:57.652823 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:46:57.652879 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:46:57.654343 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task 
2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:46:57.667524 | orchestrator | 2026-04-13 00:46:57 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:46:57.667573 | orchestrator | 2026-04-13 00:46:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:00.702792 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:00.709131 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:00.746139 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:00.746778 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:00.747688 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:00.749561 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:00.751114 | orchestrator | 2026-04-13 00:47:00 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:47:00.751510 | orchestrator | 2026-04-13 00:47:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:03.822217 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:03.826210 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:03.837203 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:03.845564 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:03.863154 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task 
3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:03.873613 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:03.893916 | orchestrator | 2026-04-13 00:47:03 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:47:03.896172 | orchestrator | 2026-04-13 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:06.992651 | orchestrator | 2026-04-13 00:47:06 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:06.999670 | orchestrator | 2026-04-13 00:47:07 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:07.004420 | orchestrator | 2026-04-13 00:47:07 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:07.010480 | orchestrator | 2026-04-13 00:47:07 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:07.015294 | orchestrator | 2026-04-13 00:47:07 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:07.020932 | orchestrator | 2026-04-13 00:47:07 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:07.027495 | orchestrator | 2026-04-13 00:47:07 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state STARTED 2026-04-13 00:47:07.028823 | orchestrator | 2026-04-13 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:10.293012 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:10.293111 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:10.293126 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:10.293137 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task 
5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:10.293147 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:10.293157 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:10.293167 | orchestrator | 2026-04-13 00:47:10 | INFO  | Task 1820aa86-d8b2-43f4-ba8a-ded4f7b5c0ae is in state SUCCESS 2026-04-13 00:47:10.293176 | orchestrator | 2026-04-13 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:13.606254 | orchestrator | 2026-04-13 00:47:13 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:13.606355 | orchestrator | 2026-04-13 00:47:13 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:13.606367 | orchestrator | 2026-04-13 00:47:13 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:13.606416 | orchestrator | 2026-04-13 00:47:13 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:13.606425 | orchestrator | 2026-04-13 00:47:13 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:13.606432 | orchestrator | 2026-04-13 00:47:13 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:13.606440 | orchestrator | 2026-04-13 00:47:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:16.801433 | orchestrator | 2026-04-13 00:47:16 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:16.801558 | orchestrator | 2026-04-13 00:47:16 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:16.801572 | orchestrator | 2026-04-13 00:47:16 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:16.801582 | orchestrator | 2026-04-13 00:47:16 | INFO  | Task 
5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:16.801591 | orchestrator | 2026-04-13 00:47:16 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:16.801600 | orchestrator | 2026-04-13 00:47:16 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:16.801609 | orchestrator | 2026-04-13 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:19.749990 | orchestrator | 2026-04-13 00:47:19 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state STARTED 2026-04-13 00:47:19.750139 | orchestrator | 2026-04-13 00:47:19 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:19.750153 | orchestrator | 2026-04-13 00:47:19 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:19.750189 | orchestrator | 2026-04-13 00:47:19 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:19.750199 | orchestrator | 2026-04-13 00:47:19 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:19.750209 | orchestrator | 2026-04-13 00:47:19 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:19.750219 | orchestrator | 2026-04-13 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:22.836128 | orchestrator | 2026-04-13 00:47:22 | INFO  | Task ed67048e-205f-41f2-9b51-3037b7873fd2 is in state SUCCESS 2026-04-13 00:47:22.836996 | orchestrator | 2026-04-13 00:47:22 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:22.838529 | orchestrator | 2026-04-13 00:47:22 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:22.839720 | orchestrator | 2026-04-13 00:47:22 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:22.840912 | orchestrator | 2026-04-13 00:47:22 | INFO  | Task 
3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:22.841994 | orchestrator | 2026-04-13 00:47:22 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:22.842066 | orchestrator | 2026-04-13 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:25.917414 | orchestrator | 2026-04-13 00:47:25 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:25.917505 | orchestrator | 2026-04-13 00:47:25 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:25.917522 | orchestrator | 2026-04-13 00:47:25 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:25.917749 | orchestrator | 2026-04-13 00:47:25 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:25.918965 | orchestrator | 2026-04-13 00:47:25 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:25.919045 | orchestrator | 2026-04-13 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:28.971279 | orchestrator | 2026-04-13 00:47:28 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:28.973442 | orchestrator | 2026-04-13 00:47:28 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:28.974942 | orchestrator | 2026-04-13 00:47:28 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:28.977432 | orchestrator | 2026-04-13 00:47:28 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:28.978790 | orchestrator | 2026-04-13 00:47:28 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:28.978842 | orchestrator | 2026-04-13 00:47:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:32.018416 | orchestrator | 2026-04-13 00:47:32 | INFO  | Task 
8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:32.020576 | orchestrator | 2026-04-13 00:47:32 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:32.021815 | orchestrator | 2026-04-13 00:47:32 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:32.023012 | orchestrator | 2026-04-13 00:47:32 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:32.024491 | orchestrator | 2026-04-13 00:47:32 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:32.024591 | orchestrator | 2026-04-13 00:47:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:35.076934 | orchestrator | 2026-04-13 00:47:35 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:35.077773 | orchestrator | 2026-04-13 00:47:35 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:35.078801 | orchestrator | 2026-04-13 00:47:35 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:35.079513 | orchestrator | 2026-04-13 00:47:35 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:35.080081 | orchestrator | 2026-04-13 00:47:35 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:35.080306 | orchestrator | 2026-04-13 00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:38.112485 | orchestrator | 2026-04-13 00:47:38 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:38.113630 | orchestrator | 2026-04-13 00:47:38 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:38.115630 | orchestrator | 2026-04-13 00:47:38 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:38.116454 | orchestrator | 2026-04-13 00:47:38 | INFO  | Task 
3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:38.118277 | orchestrator | 2026-04-13 00:47:38 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:38.118434 | orchestrator | 2026-04-13 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:41.160069 | orchestrator | 2026-04-13 00:47:41 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:41.160955 | orchestrator | 2026-04-13 00:47:41 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:41.162454 | orchestrator | 2026-04-13 00:47:41 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:41.164129 | orchestrator | 2026-04-13 00:47:41 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:41.165446 | orchestrator | 2026-04-13 00:47:41 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:41.165480 | orchestrator | 2026-04-13 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:44.196454 | orchestrator | 2026-04-13 00:47:44 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:44.197290 | orchestrator | 2026-04-13 00:47:44 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:44.200010 | orchestrator | 2026-04-13 00:47:44 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state STARTED 2026-04-13 00:47:44.202546 | orchestrator | 2026-04-13 00:47:44 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED 2026-04-13 00:47:44.203637 | orchestrator | 2026-04-13 00:47:44 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED 2026-04-13 00:47:44.204269 | orchestrator | 2026-04-13 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:47.256020 | orchestrator | 2026-04-13 00:47:47 | INFO  | Task 
8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:47.263628 | orchestrator | 2026-04-13 00:47:47 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:47.272931 | orchestrator | 2026-04-13 00:47:47.273057 | orchestrator | 2026-04-13 00:47:47.273067 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-13 00:47:47.273073 | orchestrator | 2026-04-13 00:47:47.273091 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-13 00:47:47.273097 | orchestrator | Monday 13 April 2026 00:46:28 +0000 (0:00:00.270) 0:00:00.270 ********** 2026-04-13 00:47:47.273103 | orchestrator | ok: [testbed-manager] => { 2026-04-13 00:47:47.273112 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-04-13 00:47:47.273117 | orchestrator | } 2026-04-13 00:47:47.273121 | orchestrator | 2026-04-13 00:47:47.273125 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-13 00:47:47.273129 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.217) 0:00:00.488 ********** 2026-04-13 00:47:47.273133 | orchestrator | ok: [testbed-manager] 2026-04-13 00:47:47.273138 | orchestrator | 2026-04-13 00:47:47.273142 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-13 00:47:47.273146 | orchestrator | Monday 13 April 2026 00:46:32 +0000 (0:00:03.809) 0:00:04.297 ********** 2026-04-13 00:47:47.273150 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-13 00:47:47.273155 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-13 00:47:47.273158 | orchestrator | 2026-04-13 00:47:47.273162 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-13 
00:47:47.273166 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:01.768) 0:00:06.066 ********** 2026-04-13 00:47:47.273170 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273174 | orchestrator | 2026-04-13 00:47:47.273177 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-13 00:47:47.273181 | orchestrator | Monday 13 April 2026 00:46:37 +0000 (0:00:02.751) 0:00:08.817 ********** 2026-04-13 00:47:47.273185 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273189 | orchestrator | 2026-04-13 00:47:47.273192 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-13 00:47:47.273196 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:01.574) 0:00:10.392 ********** 2026-04-13 00:47:47.273200 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-04-13 00:47:47.273204 | orchestrator | ok: [testbed-manager] 2026-04-13 00:47:47.273208 | orchestrator | 2026-04-13 00:47:47.273211 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-13 00:47:47.273218 | orchestrator | Monday 13 April 2026 00:47:05 +0000 (0:00:26.262) 0:00:36.654 ********** 2026-04-13 00:47:47.273224 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273240 | orchestrator | 2026-04-13 00:47:47.273246 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:47:47.273253 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:47:47.273260 | orchestrator | 2026-04-13 00:47:47.273263 | orchestrator | 2026-04-13 00:47:47.273267 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:47:47.273273 | orchestrator | Monday 13 April 2026 00:47:09 +0000 (0:00:04.039) 0:00:40.694 ********** 
2026-04-13 00:47:47.273279 | orchestrator | =============================================================================== 2026-04-13 00:47:47.273282 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.26s 2026-04-13 00:47:47.273286 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.04s 2026-04-13 00:47:47.273294 | orchestrator | osism.services.homer : Create traefik external network ------------------ 3.81s 2026-04-13 00:47:47.273298 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.75s 2026-04-13 00:47:47.273302 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.77s 2026-04-13 00:47:47.273306 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.57s 2026-04-13 00:47:47.273309 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.22s 2026-04-13 00:47:47.273318 | orchestrator | 2026-04-13 00:47:47.273322 | orchestrator | 2026-04-13 00:47:47.273326 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-13 00:47:47.273329 | orchestrator | 2026-04-13 00:47:47.273333 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-13 00:47:47.273337 | orchestrator | Monday 13 April 2026 00:46:28 +0000 (0:00:00.339) 0:00:00.339 ********** 2026-04-13 00:47:47.273340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-13 00:47:47.273345 | orchestrator | 2026-04-13 00:47:47.273369 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-13 00:47:47.273373 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.517) 0:00:00.856 ********** 2026-04-13 
00:47:47.273377 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-13 00:47:47.273380 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-13 00:47:47.273384 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-13 00:47:47.273388 | orchestrator | 2026-04-13 00:47:47.273394 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-13 00:47:47.273400 | orchestrator | Monday 13 April 2026 00:46:32 +0000 (0:00:02.907) 0:00:03.764 ********** 2026-04-13 00:47:47.273406 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273411 | orchestrator | 2026-04-13 00:47:47.273418 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-13 00:47:47.273425 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:02.517) 0:00:06.282 ********** 2026-04-13 00:47:47.273445 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 
2026-04-13 00:47:47.273453 | orchestrator | ok: [testbed-manager] 2026-04-13 00:47:47.273460 | orchestrator | 2026-04-13 00:47:47.273464 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-13 00:47:47.273469 | orchestrator | Monday 13 April 2026 00:47:08 +0000 (0:00:33.711) 0:00:39.994 ********** 2026-04-13 00:47:47.273473 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273477 | orchestrator | 2026-04-13 00:47:47.273483 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-13 00:47:47.273490 | orchestrator | Monday 13 April 2026 00:47:11 +0000 (0:00:02.881) 0:00:42.876 ********** 2026-04-13 00:47:47.273497 | orchestrator | ok: [testbed-manager] 2026-04-13 00:47:47.273504 | orchestrator | 2026-04-13 00:47:47.273510 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-13 00:47:47.273517 | orchestrator | Monday 13 April 2026 00:47:13 +0000 (0:00:02.021) 0:00:44.897 ********** 2026-04-13 00:47:47.273524 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273530 | orchestrator | 2026-04-13 00:47:47.273537 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-13 00:47:47.273544 | orchestrator | Monday 13 April 2026 00:47:17 +0000 (0:00:03.942) 0:00:48.840 ********** 2026-04-13 00:47:47.273551 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273556 | orchestrator | 2026-04-13 00:47:47.273561 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-13 00:47:47.273565 | orchestrator | Monday 13 April 2026 00:47:19 +0000 (0:00:01.790) 0:00:50.630 ********** 2026-04-13 00:47:47.273569 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.273573 | orchestrator | 2026-04-13 00:47:47.273578 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : 
Copy bash completion script] *** 2026-04-13 00:47:47.273582 | orchestrator | Monday 13 April 2026 00:47:20 +0000 (0:00:01.119) 0:00:51.749 ********** 2026-04-13 00:47:47.273586 | orchestrator | ok: [testbed-manager] 2026-04-13 00:47:47.273591 | orchestrator | 2026-04-13 00:47:47.273595 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:47:47.273599 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:47:47.273608 | orchestrator | 2026-04-13 00:47:47.273612 | orchestrator | 2026-04-13 00:47:47.273616 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:47:47.273620 | orchestrator | Monday 13 April 2026 00:47:21 +0000 (0:00:00.732) 0:00:52.482 ********** 2026-04-13 00:47:47.273624 | orchestrator | =============================================================================== 2026-04-13 00:47:47.273629 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.71s 2026-04-13 00:47:47.273633 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.94s 2026-04-13 00:47:47.273637 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.91s 2026-04-13 00:47:47.273642 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.88s 2026-04-13 00:47:47.273646 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.52s 2026-04-13 00:47:47.273650 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.02s 2026-04-13 00:47:47.273654 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.79s 2026-04-13 00:47:47.273659 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.12s 2026-04-13 00:47:47.273663 | 
orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.73s 2026-04-13 00:47:47.273667 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.52s 2026-04-13 00:47:47.273671 | orchestrator | 2026-04-13 00:47:47.273676 | orchestrator | 2026-04-13 00:47:47.273680 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-13 00:47:47.273684 | orchestrator | 2026-04-13 00:47:47.273688 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-13 00:47:47.273692 | orchestrator | Monday 13 April 2026 00:46:22 +0000 (0:00:00.338) 0:00:00.338 ********** 2026-04-13 00:47:47.273697 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:47:47.273701 | orchestrator | 2026-04-13 00:47:47.273706 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-13 00:47:47.273710 | orchestrator | Monday 13 April 2026 00:46:24 +0000 (0:00:01.217) 0:00:01.556 ********** 2026-04-13 00:47:47.273714 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273718 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273723 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273727 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273731 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273735 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273739 | orchestrator | changed: [testbed-manager] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273744 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273748 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273753 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273757 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273765 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-13 00:47:47.273769 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273773 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273778 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273787 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273791 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273795 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-13 00:47:47.273799 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273804 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273808 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-13 00:47:47.273812 | orchestrator | 2026-04-13 00:47:47.273816 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-13 
00:47:47.273820 | orchestrator | Monday 13 April 2026 00:46:28 +0000 (0:00:04.323) 0:00:05.879 ********** 2026-04-13 00:47:47.273824 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:47:47.273829 | orchestrator | 2026-04-13 00:47:47.273832 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-13 00:47:47.273858 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:01.470) 0:00:07.349 ********** 2026-04-13 00:47:47.273864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273874 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.273905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273919 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273929 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273982 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.273986 | orchestrator | 2026-04-13 00:47:47.273989 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-13 00:47:47.273993 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:05.285) 0:00:12.634 ********** 2026-04-13 00:47:47.273997 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274110 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274125 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274133 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:47:47.274141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274145 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:47:47.274149 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:47:47.274153 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:47:47.274167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274183 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:47:47.274187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274190 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:47:47.274197 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274214 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:47:47.274218 | orchestrator | 2026-04-13 00:47:47.274222 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-13 00:47:47.274226 | orchestrator | Monday 13 April 2026 00:46:39 +0000 
(0:00:04.882) 0:00:17.516 ********** 2026-04-13 00:47:47.274230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274270 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:47:47.274274 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:47:47.274278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274282 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274293 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:47:47.274299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274307 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:47:47.274311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-13 00:47:47.274331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274334 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:47:47.274338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274378 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:47:47.274382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274385 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:47:47.274389 | orchestrator | 2026-04-13 00:47:47.274393 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-13 00:47:47.274397 | orchestrator | Monday 13 April 2026 00:46:46 +0000 (0:00:06.446) 0:00:23.963 ********** 2026-04-13 00:47:47.274400 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:47:47.274404 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:47:47.274408 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:47:47.274411 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:47:47.274415 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:47:47.274419 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:47:47.274422 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:47:47.274426 | orchestrator | 2026-04-13 00:47:47.274432 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-13 00:47:47.274436 | orchestrator | Monday 13 April 2026 00:46:47 +0000 (0:00:01.112) 0:00:25.076 ********** 2026-04-13 00:47:47.274440 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:47:47.274444 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:47:47.274447 | orchestrator | 
skipping: [testbed-node-1] 2026-04-13 00:47:47.274451 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:47:47.274455 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:47:47.274459 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:47:47.274462 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:47:47.274466 | orchestrator | 2026-04-13 00:47:47.274470 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-13 00:47:47.274473 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:00.755) 0:00:25.832 ********** 2026-04-13 00:47:47.274477 | orchestrator | skipping: [testbed-manager] 2026-04-13 00:47:47.274481 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:47:47.274484 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:47:47.274488 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:47:47.274492 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:47:47.274495 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:47:47.274505 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:47:47.274509 | orchestrator | 2026-04-13 00:47:47.274513 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-13 00:47:47.274516 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:01.247) 0:00:27.080 ********** 2026-04-13 00:47:47.274520 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.274524 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:47:47.274528 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:47:47.274531 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:47:47.274535 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:47:47.274538 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:47:47.274542 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:47:47.274546 | orchestrator | 2026-04-13 00:47:47.274549 | orchestrator | TASK [common : Copying over config.json 
files for services] ******************** 2026-04-13 00:47:47.274553 | orchestrator | Monday 13 April 2026 00:46:52 +0000 (0:00:02.697) 0:00:29.778 ********** 2026-04-13 00:47:47.274557 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274572 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274598 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.274681 | orchestrator | 2026-04-13 00:47:47.274685 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-13 00:47:47.274689 | orchestrator | Monday 13 April 2026 00:46:58 +0000 (0:00:06.398) 0:00:36.176 ********** 2026-04-13 00:47:47.274694 | orchestrator | [WARNING]: Skipped 2026-04-13 00:47:47.274701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-13 00:47:47.274706 | orchestrator | to this access issue: 2026-04-13 00:47:47.274712 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-13 00:47:47.274718 | orchestrator | directory 2026-04-13 00:47:47.274724 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:47:47.274729 | orchestrator | 2026-04-13 00:47:47.274735 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-13 00:47:47.274741 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:01.136) 0:00:37.313 ********** 2026-04-13 00:47:47.274746 | orchestrator | [WARNING]: Skipped 2026-04-13 00:47:47.274752 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-13 00:47:47.274758 | orchestrator | to this access issue: 2026-04-13 00:47:47.274764 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-13 00:47:47.274770 | orchestrator | directory 2026-04-13 00:47:47.274776 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:47:47.274782 | orchestrator | 2026-04-13 00:47:47.274785 | orchestrator | TASK [common : Find 
custom fluentd format config files] ************************ 2026-04-13 00:47:47.274789 | orchestrator | Monday 13 April 2026 00:47:01 +0000 (0:00:01.611) 0:00:38.924 ********** 2026-04-13 00:47:47.274793 | orchestrator | [WARNING]: Skipped 2026-04-13 00:47:47.274796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-13 00:47:47.274800 | orchestrator | to this access issue: 2026-04-13 00:47:47.274804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-13 00:47:47.274808 | orchestrator | directory 2026-04-13 00:47:47.274811 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:47:47.274816 | orchestrator | 2026-04-13 00:47:47.274823 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-13 00:47:47.274827 | orchestrator | Monday 13 April 2026 00:47:02 +0000 (0:00:01.393) 0:00:40.318 ********** 2026-04-13 00:47:47.274830 | orchestrator | [WARNING]: Skipped 2026-04-13 00:47:47.274834 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-13 00:47:47.274838 | orchestrator | to this access issue: 2026-04-13 00:47:47.274842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-13 00:47:47.274845 | orchestrator | directory 2026-04-13 00:47:47.274849 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 00:47:47.274853 | orchestrator | 2026-04-13 00:47:47.274856 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-13 00:47:47.274860 | orchestrator | Monday 13 April 2026 00:47:04 +0000 (0:00:01.454) 0:00:41.772 ********** 2026-04-13 00:47:47.274867 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:47:47.274871 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:47:47.274875 | orchestrator | changed: [testbed-node-2] 2026-04-13 
00:47:47.274882 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:47:47.274886 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:47:47.274890 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.274893 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:47:47.274897 | orchestrator | 2026-04-13 00:47:47.274901 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-13 00:47:47.274905 | orchestrator | Monday 13 April 2026 00:47:14 +0000 (0:00:10.431) 0:00:52.203 ********** 2026-04-13 00:47:47.274909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274913 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274916 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274920 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274924 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274927 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274931 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-13 00:47:47.274935 | orchestrator | 2026-04-13 00:47:47.274938 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-13 00:47:47.274942 | orchestrator | Monday 13 April 2026 00:47:19 +0000 (0:00:05.137) 0:00:57.341 ********** 2026-04-13 00:47:47.274946 | orchestrator | changed: [testbed-manager] 2026-04-13 00:47:47.274950 | orchestrator | changed: [testbed-node-0] 2026-04-13 
00:47:47.274953 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:47:47.274957 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:47:47.274961 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:47:47.274964 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:47:47.274971 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:47:47.274975 | orchestrator | 2026-04-13 00:47:47.274979 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-13 00:47:47.274983 | orchestrator | Monday 13 April 2026 00:47:23 +0000 (0:00:04.086) 0:01:01.428 ********** 2026-04-13 00:47:47.274986 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.274991 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.274995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.275003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.275007 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47 | INFO  | Task 5d402ac7-3b85-470a-a2ca-9220ac0011ed is in state SUCCESS 2026-04-13 00:47:47.275019 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.275023 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.275030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.275034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-13 00:47:47.275040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.275047 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.275054 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.275058 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.275062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.275073 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.275077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.275113 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-13 00:47:47.275122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:47:47.275126 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.275133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.275137 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:47:47.275140 | orchestrator | 2026-04-13 00:47:47.275144 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-13 00:47:47.275148 | orchestrator | Monday 13 April 2026 00:47:27 +0000 (0:00:03.464) 0:01:04.892 ********** 2026-04-13 00:47:47.275152 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:47:47.275156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:47:47.275160 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:47:47.275163 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:47:47.275167 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:47:47.275174 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-13 00:47:47.275178 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2026-04-13 00:47:47.275181 | orchestrator |
2026-04-13 00:47:47.275185 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-13 00:47:47.275189 | orchestrator | Monday 13 April 2026 00:47:30 +0000 (0:00:03.050) 0:01:07.942 **********
2026-04-13 00:47:47.275193 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275200 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275213 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275216 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275220 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-13 00:47:47.275224 | orchestrator |
2026-04-13 00:47:47.275228 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-04-13 00:47:47.275231 | orchestrator | Monday 13 April 2026 00:47:32 +0000 (0:00:02.540) 0:01:10.482 **********
2026-04-13 00:47:47.275235 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275261 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275276 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275308 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275368 | orchestrator |
2026-04-13 00:47:47.275373 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-13 00:47:47.275377 | orchestrator | Monday 13 April 2026 00:47:36 +0000 (0:00:03.955) 0:01:14.438 **********
2026-04-13 00:47:47.275382 | orchestrator | changed: [testbed-manager] => {
2026-04-13 00:47:47.275389 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275393 | orchestrator | }
2026-04-13 00:47:47.275397 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 00:47:47.275401 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275405 | orchestrator | }
2026-04-13 00:47:47.275409 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 00:47:47.275412 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275416 | orchestrator | }
2026-04-13 00:47:47.275420 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 00:47:47.275423 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275427 | orchestrator | }
2026-04-13 00:47:47.275431 | orchestrator | changed: [testbed-node-3] => {
2026-04-13 00:47:47.275434 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275438 | orchestrator | }
2026-04-13 00:47:47.275442 | orchestrator | changed: [testbed-node-4] => {
2026-04-13 00:47:47.275445 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275449 | orchestrator | }
2026-04-13 00:47:47.275453 | orchestrator | changed: [testbed-node-5] => {
2026-04-13 00:47:47.275457 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:47:47.275460 | orchestrator | }
2026-04-13 00:47:47.275464 | orchestrator |
2026-04-13 00:47:47.275468 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 00:47:47.275471 | orchestrator | Monday 13 April 2026 00:47:37 +0000 (0:00:01.049) 0:01:15.488 **********
2026-04-13 00:47:47.275475 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275486 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275510 | orchestrator | skipping: [testbed-manager]
2026-04-13 00:47:47.275514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275553 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:47:47.275557 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:47:47.275564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275576 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:47:47.275579 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:47:47.275583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275601 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:47:47.275605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-13 00:47:47.275611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:47:47.275619 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:47:47.275623 | orchestrator |
2026-04-13 00:47:47.275627 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-13 00:47:47.275630 | orchestrator | Monday 13 April 2026 00:47:39 +0000 (0:00:01.915) 0:01:17.403 **********
2026-04-13 00:47:47.275634 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:47.275638 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:47.275642 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:47.275645 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:47.275649 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:47.275653 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:47.275656 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:47.275660 | orchestrator |
2026-04-13 00:47:47.275664 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-13 00:47:47.275668 | orchestrator | Monday 13 April 2026 00:47:41 +0000 (0:00:01.674) 0:01:19.077 **********
2026-04-13 00:47:47.275672 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:47.275675 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:47.275679 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:47.275683 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:47.275686 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:47.275690 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:47.275694 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:47.275698 | orchestrator |
2026-04-13 00:47:47.275701 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275705 | orchestrator | Monday 13 April 2026 00:47:42 +0000 (0:00:01.362) 0:01:20.440 **********
2026-04-13 00:47:47.275709 | orchestrator |
2026-04-13 00:47:47.275713 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275716 | orchestrator | Monday 13 April 2026 00:47:42 +0000 (0:00:00.088) 0:01:20.528 **********
2026-04-13 00:47:47.275723 | orchestrator |
2026-04-13 00:47:47.275727 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275731 | orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:00.088) 0:01:20.617 **********
2026-04-13 00:47:47.275734 | orchestrator |
2026-04-13 00:47:47.275738 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275742 | orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:00.064) 0:01:20.682 **********
2026-04-13 00:47:47.275745 | orchestrator |
2026-04-13 00:47:47.275749 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275753 | orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:00.074) 0:01:20.756 **********
2026-04-13 00:47:47.275756 | orchestrator |
2026-04-13 00:47:47.275760 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275764 | orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:00.067) 0:01:20.824 **********
2026-04-13 00:47:47.275767 | orchestrator |
2026-04-13 00:47:47.275771 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-13 00:47:47.275775 | orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:00.066) 0:01:20.891 **********
2026-04-13 00:47:47.275779 | orchestrator |
2026-04-13 00:47:47.275783 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-13 00:47:47.275787 | orchestrator | Monday 13 April 2026 00:47:43 +0000 (0:00:00.088) 0:01:20.980 **********
2026-04-13 00:47:47.275802 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_uvychljy/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_uvychljy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_uvychljy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_uvychljy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-13 00:47:47.275810 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jsyxnhw8/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jsyxnhw8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_jsyxnhw8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jsyxnhw8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-13 00:47:47.275824 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xwysiwtf/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_xwysiwtf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_xwysiwtf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xwysiwtf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-13 00:47:47.275844 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_he_cj0oi/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_he_cj0oi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_he_cj0oi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_he_cj0oi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-13 00:47:47.275852 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_yvz8qsqp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_yvz8qsqp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_yvz8qsqp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_yvz8qsqp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-13 00:47:47.275863 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_petpglq7/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_petpglq7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_petpglq7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_petpglq7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-13 00:47:47.275873 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ke1gmbpx/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ke1gmbpx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ke1gmbpx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ke1gmbpx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-13 00:47:47.275878 | orchestrator | 2026-04-13 00:47:47.275882 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:47:47.275886 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275893 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275897 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275901 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275905 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275909 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275915 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:47:47.275919 | orchestrator | 2026-04-13 00:47:47.275923 | orchestrator | 2026-04-13 00:47:47.275927 | orchestrator | 
TASKS RECAP ********************************************************************
2026-04-13 00:47:47.275930 | orchestrator | Monday 13 April 2026 00:47:46 +0000 (0:00:03.247) 0:01:24.227 **********
2026-04-13 00:47:47.275934 | orchestrator | ===============================================================================
2026-04-13 00:47:47.275938 | orchestrator | common : Copying over fluentd.conf ------------------------------------- 10.43s
2026-04-13 00:47:47.275942 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 6.45s
2026-04-13 00:47:47.275945 | orchestrator | common : Copying over config.json files for services -------------------- 6.40s
2026-04-13 00:47:47.275949 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.29s
2026-04-13 00:47:47.275953 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.14s
2026-04-13 00:47:47.275956 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 4.88s
2026-04-13 00:47:47.275960 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.32s
2026-04-13 00:47:47.275964 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.09s
2026-04-13 00:47:47.275967 | orchestrator | service-check-containers : common | Check containers -------------------- 3.96s
2026-04-13 00:47:47.275971 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.46s
2026-04-13 00:47:47.275975 | orchestrator | common : Restart fluentd container -------------------------------------- 3.25s
2026-04-13 00:47:47.275978 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.05s
2026-04-13 00:47:47.275982 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.70s
2026-04-13 00:47:47.275986 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.54s
2026-04-13 00:47:47.275990 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.92s
2026-04-13 00:47:47.275993 | orchestrator | common : Creating log volume -------------------------------------------- 1.67s
2026-04-13 00:47:47.276000 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.61s
2026-04-13 00:47:47.276003 | orchestrator | common : include_tasks -------------------------------------------------- 1.47s
2026-04-13 00:47:47.276007 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.45s
2026-04-13 00:47:47.276011 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.39s
2026-04-13 00:47:47.288583 | orchestrator | 2026-04-13 00:47:47 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state STARTED
2026-04-13 00:47:47.291494 | orchestrator | 2026-04-13 00:47:47 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED
2026-04-13 00:47:47.292746 | orchestrator | 2026-04-13 00:47:47 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:47:50.351583 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED
2026-04-13 00:47:50.353625 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED
2026-04-13 00:47:50.355755 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:47:50.358001 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED
2026-04-13 00:47:50.359600 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED
2026-04-13 00:47:50.362189 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in
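The TASKS RECAP block above is printed by the `ansible.posix.profile_tasks` callback: each line is a task name, a run of dashes, and a duration in seconds. When post-processing job logs, such lines can be picked out with a small sketch (the regex and `parse_recap` name are ours, not part of the callback):

```python
import re

# Matches profile_tasks lines like "common : Copying over fluentd.conf ----- 10.43s"
RECAP_LINE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs for every duration line found."""
    out = []
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "common : Copying over fluentd.conf ------------------------------------- 10.43s",
    "service-cert-copy : common | Copying over backend internal TLS key ------ 6.45s",
]
print(parse_recap(sample))
```

Sorting the resulting pairs by the second element reproduces the slowest-first ordering shown in the recap.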
state STARTED
2026-04-13 00:47:50.363142 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task 3feb1f0c-d775-469a-9d20-f2683021cf3c is in state SUCCESS
2026-04-13 00:47:50.363851 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:47:50.363870 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:47:50.363880 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.630) 0:00:00.630 **********
2026-04-13 00:47:50.363890 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-13 00:47:50.363899 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-13 00:47:50.363909 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-13 00:47:50.363918 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-13 00:47:50.363926 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-13 00:47:50.363934 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-13 00:47:50.363943 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-13 00:47:50.363960 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-13 00:47:50.363977 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-13 00:47:50.363985 | orchestrator | Monday 13 April 2026 00:46:30 +0000 (0:00:01.162) 0:00:01.793 **********
2026-04-13 00:47:50.363996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:47:50.364017 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-13 00:47:50.364026 | orchestrator | Monday 13 April 2026 00:46:31 +0000 (0:00:01.413) 0:00:03.206 **********
2026-04-13 00:47:50.364035 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:47:50.364045 | orchestrator | ok: [testbed-manager]
2026-04-13 00:47:50.364053 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:47:50.364062 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:47:50.364067 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:47:50.364072 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:47:50.364077 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:47:50.364087 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-13 00:47:50.364093 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:03.377) 0:00:06.584 **********
2026-04-13 00:47:50.364098 | orchestrator | ok: [testbed-manager]
2026-04-13 00:47:50.364103 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:47:50.364108 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:47:50.364113 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:47:50.364118 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:47:50.364123 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:47:50.364128 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:47:50.364139 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-13 00:47:50.364144 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:04.267) 0:00:10.851 **********
2026-04-13 00:47:50.364149 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:50.364154 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:50.364159 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:50.364165 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:50.364173 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:50.364181 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:50.364190 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:50.364226 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-13 00:47:50.364235 | orchestrator | Monday 13 April 2026 00:46:41 +0000 (0:00:02.363) 0:00:13.215 **********
2026-04-13 00:47:50.364245 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:50.364262 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:50.364267 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:50.364272 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:50.364277 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:50.364282 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:50.364287 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:50.364297 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-13 00:47:50.364302 | orchestrator | Monday 13 April 2026 00:46:51 +0000 (0:00:09.781) 0:00:22.996 **********
2026-04-13 00:47:50.364307 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:50.364312 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:50.364317 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:50.364322 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:50.364327 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:50.364332 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:50.364337 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:50.364386 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-13 00:47:50.364392 | orchestrator | Monday 13 April 2026 00:47:19 +0000 (0:00:27.608) 0:00:50.604 **********
2026-04-13 00:47:50.364398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:47:50.364410 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-13 00:47:50.364415 | orchestrator | Monday 13 April 2026 00:47:21 +0000 (0:00:02.625) 0:00:53.230 **********
2026-04-13 00:47:50.364420 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-13 00:47:50.364427 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-13 00:47:50.364433 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-13 00:47:50.364439 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-13 00:47:50.364455 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-13 00:47:50.364461 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-13 00:47:50.364467 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-13 00:47:50.364473 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-13 00:47:50.364479 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-13 00:47:50.364485 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-13 00:47:50.364491 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-13 00:47:50.364498 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-13 00:47:50.364507 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-13 00:47:50.364515 |
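The Copy configuration files task distributes `netdata.conf` and `stream.conf`, and the later server.yml/client.yml split suggests the usual netdata parent/child streaming layout: the nodes forward metrics to the manager. For illustration only (the destination, port, and API key below are made up, not taken from this job), a matching `stream.conf` pair might look like:

```ini
# child (testbed-node-*): forward metrics to the parent
[stream]
    enabled = yes
    destination = testbed-manager:19999
    api key = 11111111-2222-3333-4444-555555555555

# parent (testbed-manager): accept children presenting that key
[11111111-2222-3333-4444-555555555555]
    enabled = yes
```

The API key section name on the parent must match the `api key` value configured on the children.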
orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-13 00:47:50.364534 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-13 00:47:50.364545 | orchestrator | Monday 13 April 2026 00:47:27 +0000 (0:00:05.610) 0:00:58.840 **********
2026-04-13 00:47:50.364553 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:47:50.364559 | orchestrator | ok: [testbed-manager]
2026-04-13 00:47:50.364565 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:47:50.364571 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:47:50.364576 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:47:50.364581 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:47:50.364586 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:47:50.364602 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-13 00:47:50.364607 | orchestrator | Monday 13 April 2026 00:47:28 +0000 (0:00:01.563) 0:01:00.404 **********
2026-04-13 00:47:50.364612 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:50.364618 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:50.364626 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:50.364634 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:50.364643 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:50.364650 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:50.364658 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:50.364676 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-13 00:47:50.364683 | orchestrator | Monday 13 April 2026 00:47:30 +0000 (0:00:01.441) 0:01:01.845 **********
2026-04-13 00:47:50.364691 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:47:50.364700 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:47:50.364708 | orchestrator | ok: [testbed-manager]
2026-04-13 00:47:50.364716 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:47:50.364724 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:47:50.364729 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:47:50.364734 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:47:50.364745 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-13 00:47:50.364750 | orchestrator | Monday 13 April 2026 00:47:32 +0000 (0:00:01.404) 0:01:03.620 **********
2026-04-13 00:47:50.364755 | orchestrator | ok: [testbed-manager]
2026-04-13 00:47:50.364760 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:47:50.364765 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:47:50.364770 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:47:50.364774 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:47:50.364779 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:47:50.364784 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:47:50.364794 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-13 00:47:50.364799 | orchestrator | Monday 13 April 2026 00:47:33 +0000 (0:00:01.756) 0:01:05.025 **********
2026-04-13 00:47:50.364805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-13 00:47:50.364817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:47:50.364827 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-13 00:47:50.364832 | orchestrator | Monday 13 April 2026 00:47:35 +0000 (0:00:01.756) 0:01:06.781 **********
2026-04-13 00:47:50.364841 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:50.364857 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-13 00:47:50.364866 | orchestrator | Monday 13 April 2026 00:47:36 +0000 (0:00:01.643) 0:01:08.424 **********
2026-04-13 00:47:50.364874 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:47:50.364884 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:47:50.364892 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:47:50.364900 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:47:50.364908 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:47:50.364917 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:47:50.364924 | orchestrator | changed: [testbed-manager]
2026-04-13 00:47:50.364934 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:47:50.364939 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.364953 | orchestrator | testbed-node-0  : ok=15  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.364961 | orchestrator | testbed-node-1  : ok=15  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.364970 | orchestrator | testbed-node-2  : ok=15  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.364985 | orchestrator | testbed-node-3  : ok=15  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.364994 | orchestrator | testbed-node-4  : ok=15  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.365002 | orchestrator | testbed-node-5  : ok=15  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-04-13 00:47:50.365028 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:47:50.365037 | orchestrator | Monday 13 April 2026 00:47:48 +0000 (0:00:11.575) 0:01:20.000 **********
2026-04-13 00:47:50.365045 | orchestrator | ===============================================================================
2026-04-13 00:47:50.365053 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 27.61s
2026-04-13 00:47:50.365062 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.58s
2026-04-13 00:47:50.365071 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.78s
2026-04-13 00:47:50.365078 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.61s
2026-04-13 00:47:50.365087 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.27s
2026-04-13 00:47:50.365096 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.38s
2026-04-13 00:47:50.365103 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.63s
2026-04-13 00:47:50.365112 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.36s
2026-04-13 00:47:50.365120 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.78s
2026-04-13 00:47:50.365128 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.76s
2026-04-13 00:47:50.365136 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.64s
2026-04-13 00:47:50.365145 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.56s
2026-04-13 00:47:50.365153 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s
2026-04-13 00:47:50.365161 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.41s
2026-04-13 00:47:50.365170 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.40s
2026-04-13 00:47:50.365178 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s
2026-04-13 00:47:50.365185 | orchestrator | 2026-04-13 00:47:50 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state STARTED
2026-04-13 00:47:50.365193 | orchestrator | 2026-04-13 00:47:50 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:47:53.423706 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED
2026-04-13 00:47:53.426870 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED
2026-04-13 00:47:53.431208 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:47:53.435122 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED
2026-04-13 00:47:53.436392 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED
2026-04-13 00:47:53.437122 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in state STARTED
2026-04-13 00:47:53.437811 | orchestrator | 2026-04-13 00:47:53 | INFO  | Task 2692e777-ad58-497d-ba50-fb7ac91b8c2b is in state SUCCESS
2026-04-13 00:47:53.438158 | orchestrator | 2026-04-13 00:47:53 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:47:56.491986 | orchestrator | 2026-04-13 00:47:56 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED
2026-04-13 00:47:56.492376 | orchestrator | 2026-04-13
00:47:56 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:47:56.493421 | orchestrator | 2026-04-13 00:47:56 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:56.494501 | orchestrator | 2026-04-13 00:47:56 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:56.495592 | orchestrator | 2026-04-13 00:47:56 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:47:56.496674 | orchestrator | 2026-04-13 00:47:56 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in state STARTED 2026-04-13 00:47:56.496720 | orchestrator | 2026-04-13 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:47:59.538546 | orchestrator | 2026-04-13 00:47:59 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED 2026-04-13 00:47:59.539228 | orchestrator | 2026-04-13 00:47:59 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:47:59.540522 | orchestrator | 2026-04-13 00:47:59 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:47:59.542657 | orchestrator | 2026-04-13 00:47:59 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:47:59.543735 | orchestrator | 2026-04-13 00:47:59 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:47:59.544615 | orchestrator | 2026-04-13 00:47:59 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in state STARTED 2026-04-13 00:47:59.544654 | orchestrator | 2026-04-13 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:02.782925 | orchestrator | 2026-04-13 00:48:02 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED 2026-04-13 00:48:02.783173 | orchestrator | 2026-04-13 00:48:02 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:02.785947 | orchestrator | 2026-04-13 
00:48:02 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:02.787287 | orchestrator | 2026-04-13 00:48:02 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:02.793201 | orchestrator | 2026-04-13 00:48:02 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:02.793696 | orchestrator | 2026-04-13 00:48:02 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in state STARTED 2026-04-13 00:48:02.793747 | orchestrator | 2026-04-13 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:05.825717 | orchestrator | 2026-04-13 00:48:05 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED 2026-04-13 00:48:05.825860 | orchestrator | 2026-04-13 00:48:05 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:05.826614 | orchestrator | 2026-04-13 00:48:05 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:05.827408 | orchestrator | 2026-04-13 00:48:05 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:05.828166 | orchestrator | 2026-04-13 00:48:05 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:05.828959 | orchestrator | 2026-04-13 00:48:05 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in state STARTED 2026-04-13 00:48:05.828987 | orchestrator | 2026-04-13 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:09.021534 | orchestrator | 2026-04-13 00:48:09 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED 2026-04-13 00:48:09.022305 | orchestrator | 2026-04-13 00:48:09 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:09.023815 | orchestrator | 2026-04-13 00:48:09 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:09.024708 | orchestrator | 2026-04-13 
00:48:09 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:09.025461 | orchestrator | 2026-04-13 00:48:09 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:09.026485 | orchestrator | 2026-04-13 00:48:09 | INFO  | Task 529fc635-f8e2-4f63-96cd-fc00c4021273 is in state SUCCESS 2026-04-13 00:48:09.027759 | orchestrator | 2026-04-13 00:48:09.027804 | orchestrator | 2026-04-13 00:48:09.027813 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-13 00:48:09.027821 | orchestrator | 2026-04-13 00:48:09.027828 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-13 00:48:09.027835 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:00.317) 0:00:00.317 ********** 2026-04-13 00:48:09.027842 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:09.027850 | orchestrator | 2026-04-13 00:48:09.027857 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-13 00:48:09.027864 | orchestrator | Monday 13 April 2026 00:46:51 +0000 (0:00:01.409) 0:00:01.727 ********** 2026-04-13 00:48:09.027872 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-13 00:48:09.027878 | orchestrator | 2026-04-13 00:48:09.027885 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-13 00:48:09.027892 | orchestrator | Monday 13 April 2026 00:46:51 +0000 (0:00:00.649) 0:00:02.376 ********** 2026-04-13 00:48:09.027898 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:09.027905 | orchestrator | 2026-04-13 00:48:09.027912 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-13 00:48:09.027918 | orchestrator | Monday 13 April 2026 00:46:52 +0000 (0:00:01.235) 0:00:03.612 ********** 2026-04-13 00:48:09.027925 | orchestrator | FAILED - 
RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-13 00:48:09.027932 | orchestrator | ok: [testbed-manager] 2026-04-13 00:48:09.027938 | orchestrator | 2026-04-13 00:48:09.027945 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-13 00:48:09.027951 | orchestrator | Monday 13 April 2026 00:47:49 +0000 (0:00:56.183) 0:00:59.795 ********** 2026-04-13 00:48:09.027958 | orchestrator | changed: [testbed-manager] 2026-04-13 00:48:09.027965 | orchestrator | 2026-04-13 00:48:09.027971 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:48:09.027978 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:48:09.027986 | orchestrator | 2026-04-13 00:48:09.027993 | orchestrator | 2026-04-13 00:48:09.028000 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:48:09.028006 | orchestrator | Monday 13 April 2026 00:47:52 +0000 (0:00:03.227) 0:01:03.023 ********** 2026-04-13 00:48:09.028013 | orchestrator | =============================================================================== 2026-04-13 00:48:09.028019 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.18s 2026-04-13 00:48:09.028046 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.23s 2026-04-13 00:48:09.028054 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.41s 2026-04-13 00:48:09.028060 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.24s 2026-04-13 00:48:09.028067 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.65s 2026-04-13 00:48:09.028073 | orchestrator | 2026-04-13 00:48:09.028080 | orchestrator | 2026-04-13 00:48:09.028087 | 
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:48:09.028093 | orchestrator | 2026-04-13 00:48:09.028100 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:48:09.028107 | orchestrator | Monday 13 April 2026 00:47:54 +0000 (0:00:00.425) 0:00:00.425 ********** 2026-04-13 00:48:09.028113 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:09.028120 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:09.028127 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:09.028134 | orchestrator | 2026-04-13 00:48:09.028146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:48:09.028157 | orchestrator | Monday 13 April 2026 00:47:54 +0000 (0:00:00.418) 0:00:00.843 ********** 2026-04-13 00:48:09.028167 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-13 00:48:09.028178 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-13 00:48:09.028189 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-13 00:48:09.028200 | orchestrator | 2026-04-13 00:48:09.028212 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-13 00:48:09.028224 | orchestrator | 2026-04-13 00:48:09.028235 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-13 00:48:09.028242 | orchestrator | Monday 13 April 2026 00:47:55 +0000 (0:00:00.658) 0:00:01.502 ********** 2026-04-13 00:48:09.028249 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:48:09.028256 | orchestrator | 2026-04-13 00:48:09.028262 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-13 00:48:09.028269 | orchestrator | Monday 13 April 2026 00:47:56 
+0000 (0:00:00.713) 0:00:02.216 ********** 2026-04-13 00:48:09.028276 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-13 00:48:09.028283 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-13 00:48:09.028289 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-13 00:48:09.028296 | orchestrator | 2026-04-13 00:48:09.028303 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-13 00:48:09.028310 | orchestrator | Monday 13 April 2026 00:47:57 +0000 (0:00:01.681) 0:00:03.897 ********** 2026-04-13 00:48:09.028316 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-13 00:48:09.028323 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-13 00:48:09.028330 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-13 00:48:09.028336 | orchestrator | 2026-04-13 00:48:09.028345 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-13 00:48:09.028388 | orchestrator | Monday 13 April 2026 00:47:59 +0000 (0:00:02.027) 0:00:05.924 ********** 2026-04-13 00:48:09.028416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:48:09.028435 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:48:09.028444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:48:09.028452 | orchestrator | 2026-04-13 00:48:09.028460 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-13 00:48:09.028469 | orchestrator | Monday 13 April 2026 00:48:01 +0000 (0:00:01.838) 0:00:07.763 ********** 2026-04-13 00:48:09.028477 | orchestrator | changed: [testbed-node-0] => { 2026-04-13 00:48:09.028485 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-13 00:48:09.028493 | orchestrator | } 2026-04-13 00:48:09.028501 | orchestrator | changed: [testbed-node-1] => { 2026-04-13 00:48:09.028509 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:09.028517 | orchestrator | } 2026-04-13 00:48:09.028526 | orchestrator | changed: [testbed-node-2] => { 2026-04-13 00:48:09.028536 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:09.028545 | orchestrator | } 2026-04-13 00:48:09.028554 | orchestrator | 2026-04-13 00:48:09.028563 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:48:09.028573 | orchestrator | Monday 13 April 2026 00:48:02 +0000 (0:00:00.795) 0:00:08.559 ********** 2026-04-13 00:48:09.028583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:48:09.028593 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:09.028614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:48:09.028630 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:09.028641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:48:09.028651 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:09.028660 | orchestrator | 2026-04-13 00:48:09.028670 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-13 00:48:09.028679 | orchestrator | Monday 13 April 2026 00:48:04 +0000 (0:00:02.515) 0:00:11.074 ********** 2026-04-13 00:48:09.028695 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5l6vy85y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5l6vy85y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_5l6vy85y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5l6vy85y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"} 2026-04-13 00:48:09.028721 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_9j1982dd/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_9j1982dd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_9j1982dd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_9j1982dd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line 
in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"} 2026-04-13 00:48:09.028743 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_haj6l9le/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_haj6l9le/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n 
self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_haj6l9le/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_haj6l9le/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"} 2026-04-13 00:48:09.028758 | orchestrator | 2026-04-13 00:48:09.028766 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:48:09.028775 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:09.028784 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:09.028792 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:09.028800 | orchestrator | 2026-04-13 00:48:09.028808 | orchestrator | 2026-04-13 00:48:09.028815 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-13 00:48:09.028823 | orchestrator | Monday 13 April 2026 00:48:06 +0000 (0:00:02.008) 0:00:13.083 ********** 2026-04-13 00:48:09.028831 | orchestrator | =============================================================================== 2026-04-13 00:48:09.028839 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.52s 2026-04-13 00:48:09.028847 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.03s 2026-04-13 00:48:09.028855 | orchestrator | memcached : Restart memcached container --------------------------------- 2.01s 2026-04-13 00:48:09.028862 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.84s 2026-04-13 00:48:09.028870 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.68s 2026-04-13 00:48:09.028878 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.80s 2026-04-13 00:48:09.028886 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.71s 2026-04-13 00:48:09.028894 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2026-04-13 00:48:09.028902 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-04-13 00:48:09.028910 | orchestrator | 2026-04-13 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:12.090417 | orchestrator | 2026-04-13 00:48:12 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED 2026-04-13 00:48:12.091886 | orchestrator | 2026-04-13 00:48:12 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:12.093347 | orchestrator | 2026-04-13 00:48:12 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:12.096243 | orchestrator | 
2026-04-13 00:48:12 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:12.097705 | orchestrator | 2026-04-13 00:48:12 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:12.100427 | orchestrator | 2026-04-13 00:48:12 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:12.100506 | orchestrator | 2026-04-13 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:15.130725 | orchestrator | 2026-04-13 00:48:15 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state STARTED 2026-04-13 00:48:15.132858 | orchestrator | 2026-04-13 00:48:15 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:15.134176 | orchestrator | 2026-04-13 00:48:15 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:15.134757 | orchestrator | 2026-04-13 00:48:15 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:15.135498 | orchestrator | 2026-04-13 00:48:15 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:15.136005 | orchestrator | 2026-04-13 00:48:15 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:15.136039 | orchestrator | 2026-04-13 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:18.173315 | orchestrator | 2026-04-13 00:48:18 | INFO  | Task ea4c60b7-16be-4532-9e96-e07dbc9e7bd7 is in state SUCCESS 2026-04-13 00:48:18.175658 | orchestrator | 2026-04-13 00:48:18.175742 | orchestrator | 2026-04-13 00:48:18.175763 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:48:18.175780 | orchestrator | 2026-04-13 00:48:18.175796 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:48:18.175811 | orchestrator | Monday 13 April 2026 00:47:55 
+0000 (0:00:00.800) 0:00:00.800 ********** 2026-04-13 00:48:18.175827 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:18.175844 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:18.175860 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:18.175875 | orchestrator | 2026-04-13 00:48:18.175891 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:48:18.175908 | orchestrator | Monday 13 April 2026 00:47:55 +0000 (0:00:00.342) 0:00:01.143 ********** 2026-04-13 00:48:18.175926 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-13 00:48:18.175944 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-13 00:48:18.175992 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-13 00:48:18.176005 | orchestrator | 2026-04-13 00:48:18.176014 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-13 00:48:18.176024 | orchestrator | 2026-04-13 00:48:18.176034 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-13 00:48:18.176043 | orchestrator | Monday 13 April 2026 00:47:56 +0000 (0:00:00.398) 0:00:01.541 ********** 2026-04-13 00:48:18.176053 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:48:18.176063 | orchestrator | 2026-04-13 00:48:18.176073 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-13 00:48:18.176083 | orchestrator | Monday 13 April 2026 00:47:57 +0000 (0:00:00.935) 0:00:02.477 ********** 2026-04-13 00:48:18.176096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176250 | orchestrator | 2026-04-13 00:48:18.176262 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-13 00:48:18.176273 | orchestrator | Monday 13 
April 2026 00:47:59 +0000 (0:00:01.850) 0:00:04.328 ********** 2026-04-13 00:48:18.176285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176406 | orchestrator | 2026-04-13 00:48:18.176421 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-13 00:48:18.176432 | orchestrator | Monday 13 April 2026 00:48:02 +0000 (0:00:03.398) 0:00:07.727 ********** 2026-04-13 00:48:18.176444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176560 | orchestrator | 2026-04-13 00:48:18.176570 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-13 00:48:18.176579 | orchestrator | Monday 13 April 2026 00:48:05 +0000 (0:00:03.477) 0:00:11.205 ********** 2026-04-13 00:48:18.176589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-13 00:48:18.176647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-13 00:48:18.176658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-13 00:48:18.176668 | orchestrator |
2026-04-13 00:48:18.176678 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-04-13 00:48:18.176687 | orchestrator | Monday 13 April 2026 00:48:08 +0000 (0:00:02.549) 0:00:13.754 **********
2026-04-13 00:48:18.176697 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 00:48:18.176708 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:18.176724 | orchestrator | }
2026-04-13 00:48:18.176734 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 00:48:18.176768 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:18.176779 | orchestrator | }
2026-04-13 00:48:18.176788 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 00:48:18.176798 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:18.176807 | orchestrator | }
2026-04-13 00:48:18.176817 | orchestrator |
2026-04-13 00:48:18.176835 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 00:48:18.176850 | orchestrator | Monday 13 April 2026
00:48:09 +0000 (0:00:01.060) 0:00:14.814 ********** 2026-04-13 00:48:18.176865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-13 00:48:18.176881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-13 00:48:18.176896 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:18.176910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-13 00:48:18.176932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-13 00:48:18.176949 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:18.176978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-13 00:48:18.177009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-13 00:48:18.177023 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:18.177033 | orchestrator |
2026-04-13 00:48:18.177042 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-13 00:48:18.177052 | orchestrator | Monday 13 April 2026 00:48:11 +0000 (0:00:01.928) 0:00:16.743 **********
2026-04-13 00:48:18.177061 | orchestrator |
2026-04-13 00:48:18.177070 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-13 00:48:18.177080 | orchestrator | Monday 13 April 2026 00:48:11 +0000 (0:00:00.224) 0:00:16.967 **********
2026-04-13 00:48:18.177089 | orchestrator |
2026-04-13 00:48:18.177098 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-13 00:48:18.177108 | orchestrator | Monday 13 April 2026 00:48:11 +0000 (0:00:00.171) 0:00:17.138 **********
2026-04-13 00:48:18.177117 | orchestrator |
2026-04-13 00:48:18.177126 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-13 00:48:18.177136 | orchestrator | Monday 13 April 2026 00:48:12 +0000 (0:00:00.116) 0:00:17.255 **********
2026-04-13 00:48:18.177156 | orchestrator | fatal: [testbed-node-0]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_e8t3on2h/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_e8t3on2h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_e8t3on2h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_e8t3on2h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"} 2026-04-13 00:48:18.177194 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_f2so1xsd/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_f2so1xsd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_f2so1xsd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_f2so1xsd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"} 2026-04-13 00:48:18.177225 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_tpasv73j/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_tpasv73j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_tpasv73j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_tpasv73j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-13 00:48:18.177243 | orchestrator |
2026-04-13 00:48:18.177253 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:48:18.177263 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-13 00:48:18.177274 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-13 00:48:18.177284 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-13 00:48:18.177293 | orchestrator |
2026-04-13 00:48:18.177303 | orchestrator |
2026-04-13 00:48:18.177312 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:48:18.177322 | orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:03.081) 0:00:20.337 **********
2026-04-13 00:48:18.177331 | orchestrator | ===============================================================================
2026-04-13 00:48:18.177341 | orchestrator | redis : Copying over redis config files --------------------------------- 3.48s
2026-04-13 00:48:18.177350 | orchestrator | redis : Copying over default config.json files -------------------------- 3.40s
2026-04-13 00:48:18.177360 | orchestrator | redis : Restart redis container ----------------------------------------- 3.08s
2026-04-13 00:48:18.177369 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.55s
2026-04-13 00:48:18.177406 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.93s
2026-04-13 00:48:18.177417 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.85s
2026-04-13 00:48:18.177426 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.06s
2026-04-13 00:48:18.177436 | orchestrator | redis : include_tasks --------------------------------------------------- 0.94s
2026-04-13 00:48:18.177445 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.51s
2026-04-13 00:48:18.177454 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-04-13 00:48:18.177464 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-04-13 00:48:18.177474 | orchestrator | 2026-04-13 00:48:18 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED
2026-04-13 00:48:18.177620 | orchestrator | 2026-04-13 00:48:18 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED
2026-04-13 00:48:18.177635 | orchestrator | 2026-04-13 00:48:18 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state
STARTED 2026-04-13 00:48:18.178611 | orchestrator | 2026-04-13 00:48:18 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:18.180758 | orchestrator | 2026-04-13 00:48:18 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:18.180823 | orchestrator | 2026-04-13 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:21.211578 | orchestrator | 2026-04-13 00:48:21 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:21.218280 | orchestrator | 2026-04-13 00:48:21 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:21.218842 | orchestrator | 2026-04-13 00:48:21 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:21.219790 | orchestrator | 2026-04-13 00:48:21 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:21.222166 | orchestrator | 2026-04-13 00:48:21 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:21.222200 | orchestrator | 2026-04-13 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:24.263977 | orchestrator | 2026-04-13 00:48:24 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:24.265295 | orchestrator | 2026-04-13 00:48:24 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:24.266185 | orchestrator | 2026-04-13 00:48:24 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:24.267876 | orchestrator | 2026-04-13 00:48:24 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:24.268891 | orchestrator | 2026-04-13 00:48:24 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:24.268931 | orchestrator | 2026-04-13 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 
00:48:27.315805 | orchestrator | 2026-04-13 00:48:27 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:27.317525 | orchestrator | 2026-04-13 00:48:27 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state STARTED 2026-04-13 00:48:27.318952 | orchestrator | 2026-04-13 00:48:27 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:27.319831 | orchestrator | 2026-04-13 00:48:27 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:27.321535 | orchestrator | 2026-04-13 00:48:27 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:27.321564 | orchestrator | 2026-04-13 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:30.373612 | orchestrator | 2026-04-13 00:48:30 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:30.376773 | orchestrator | 2026-04-13 00:48:30 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:30.379368 | orchestrator | 2026-04-13 00:48:30 | INFO  | Task a0047f33-c077-44e3-9c04-ad337ec6db8e is in state SUCCESS 2026-04-13 00:48:30.381350 | orchestrator | 2026-04-13 00:48:30.381426 | orchestrator | 2026-04-13 00:48:30.381440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:48:30.381454 | orchestrator | 2026-04-13 00:48:30.381464 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:48:30.381474 | orchestrator | Monday 13 April 2026 00:47:55 +0000 (0:00:00.657) 0:00:00.657 ********** 2026-04-13 00:48:30.381482 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:30.381494 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:30.381503 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:30.381511 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:30.381518 | orchestrator | ok: 
[testbed-node-4] 2026-04-13 00:48:30.381553 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:30.381562 | orchestrator | 2026-04-13 00:48:30.381569 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:48:30.381577 | orchestrator | Monday 13 April 2026 00:47:56 +0000 (0:00:00.856) 0:00:01.513 ********** 2026-04-13 00:48:30.381585 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-13 00:48:30.381593 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-13 00:48:30.381600 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-13 00:48:30.381609 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-13 00:48:30.381616 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-13 00:48:30.381623 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-13 00:48:30.381631 | orchestrator | 2026-04-13 00:48:30.381639 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-13 00:48:30.381647 | orchestrator | 2026-04-13 00:48:30.381655 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-13 00:48:30.381663 | orchestrator | Monday 13 April 2026 00:47:57 +0000 (0:00:01.056) 0:00:02.570 ********** 2026-04-13 00:48:30.381673 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:48:30.381683 | orchestrator | 2026-04-13 00:48:30.381692 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-13 00:48:30.381701 | orchestrator | Monday 13 April 2026 00:47:58 +0000 
(0:00:01.816) 0:00:04.387 ********** 2026-04-13 00:48:30.381708 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-13 00:48:30.381718 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-13 00:48:30.381740 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-13 00:48:30.381749 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-13 00:48:30.381757 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-13 00:48:30.381765 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-13 00:48:30.381774 | orchestrator | 2026-04-13 00:48:30.381783 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-13 00:48:30.381791 | orchestrator | Monday 13 April 2026 00:48:01 +0000 (0:00:02.533) 0:00:06.920 ********** 2026-04-13 00:48:30.381800 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-13 00:48:30.381808 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-13 00:48:30.381815 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-13 00:48:30.381823 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-13 00:48:30.381831 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-13 00:48:30.381840 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-13 00:48:30.381849 | orchestrator | 2026-04-13 00:48:30.381857 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-13 00:48:30.381865 | orchestrator | Monday 13 April 2026 00:48:03 +0000 (0:00:02.297) 0:00:09.218 ********** 2026-04-13 00:48:30.381874 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-13 00:48:30.381882 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:30.381891 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-13 00:48:30.381900 
| orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:30.381908 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-13 00:48:30.381917 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:30.381925 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-13 00:48:30.381933 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:48:30.381941 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-13 00:48:30.381959 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:48:30.381968 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-13 00:48:30.381976 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:48:30.381987 | orchestrator | 2026-04-13 00:48:30.381997 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-13 00:48:30.382007 | orchestrator | Monday 13 April 2026 00:48:05 +0000 (0:00:02.029) 0:00:11.247 ********** 2026-04-13 00:48:30.382084 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:30.382094 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:30.382103 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:30.382112 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:48:30.382120 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:48:30.382130 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:48:30.382140 | orchestrator | 2026-04-13 00:48:30.382148 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-13 00:48:30.382157 | orchestrator | Monday 13 April 2026 00:48:07 +0000 (0:00:01.299) 0:00:12.546 ********** 2026-04-13 00:48:30.382188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-04-13 00:48:30.382224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382360 | orchestrator | 2026-04-13 00:48:30.382369 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-13 00:48:30.382378 | orchestrator | Monday 13 April 2026 00:48:10 +0000 (0:00:02.974) 0:00:15.521 ********** 2026-04-13 00:48:30.382386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382460 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382591 | orchestrator | 2026-04-13 00:48:30.382600 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-13 00:48:30.382608 | orchestrator | Monday 13 April 2026 00:48:14 +0000 (0:00:04.610) 0:00:20.131 ********** 2026-04-13 00:48:30.382616 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:30.382625 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:30.382633 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:30.382642 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:48:30.382650 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:48:30.382658 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:48:30.382667 | orchestrator | 2026-04-13 00:48:30.382675 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-13 00:48:30.382683 | orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:00.944) 0:00:21.075 ********** 2026-04-13 00:48:30.382695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-13 00:48:30.382734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.382742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.382750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.382772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.382780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.382788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.382803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.382811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.382827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.382836 | orchestrator |
2026-04-13 00:48:30.382844 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-04-13 00:48:30.382852 | orchestrator | Monday 13 April 2026 00:48:18 +0000 (0:00:02.781) 0:00:23.857 **********
2026-04-13 00:48:30.382860 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 00:48:30.382868 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:30.382875 | orchestrator | }
2026-04-13 00:48:30.382884 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 00:48:30.382891 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:30.382899 | orchestrator | }
2026-04-13 00:48:30.382907 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 00:48:30.382915 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:30.382923 | orchestrator | }
2026-04-13 00:48:30.382930 | orchestrator | changed: [testbed-node-3] => {
2026-04-13 00:48:30.382938 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:30.382946 | orchestrator | }
2026-04-13 00:48:30.382954 | orchestrator | changed: [testbed-node-4] => {
2026-04-13 00:48:30.382961 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:30.382969 | orchestrator | }
2026-04-13 00:48:30.382978 | orchestrator | changed: [testbed-node-5] => {
2026-04-13 00:48:30.382987 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:48:30.382995 | orchestrator | }
2026-04-13 00:48:30.383003 | orchestrator |
2026-04-13 00:48:30.383011 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 00:48:30.383020 | orchestrator | Monday 13 April 2026 00:48:19 +0000 (0:00:00.766) 0:00:24.623 **********
2026-04-13 00:48:30.383028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.383043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.383051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.383067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.383094 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:30.383107 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:48:30.383115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.383124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.383133 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:48:30.383141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.383154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.383168 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:48:30.383176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.383184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.383193 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:48:30.383201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-13 00:48:30.383210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-13 00:48:30.383218 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:48:30.383226 | orchestrator |
2026-04-13 00:48:30.383235 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-13 00:48:30.383244 | orchestrator | Monday 13 April 2026 00:48:21 +0000 (0:00:02.779) 0:00:27.403 **********
2026-04-13 00:48:30.383252 | orchestrator |
2026-04-13 00:48:30.383260 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-13 00:48:30.383274 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:00.376) 0:00:27.779 **********
2026-04-13 00:48:30.383282 | orchestrator |
2026-04-13 00:48:30.383296 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-13 00:48:30.383303 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:00.136) 0:00:27.915 **********
2026-04-13 00:48:30.383311 | orchestrator |
2026-04-13 00:48:30.383320 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-13 00:48:30.383328 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:00.163) 0:00:28.079 **********
2026-04-13 00:48:30.383336 | orchestrator |
2026-04-13 00:48:30.383345 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-13 00:48:30.384042 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:00.179) 0:00:28.259 **********
2026-04-13 00:48:30.384080 | orchestrator |
2026-04-13 00:48:30.384089 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-13 00:48:30.384096 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:00.167) 0:00:28.426 **********
2026-04-13 00:48:30.384103 | orchestrator |
2026-04-13 00:48:30.384111 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-13 00:48:30.384118 | orchestrator | Monday 13 April 2026 00:48:23 +0000 (0:00:00.189) 0:00:28.616 **********
2026-04-13 00:48:30.384130 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nydkh6gc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nydkh6gc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_nydkh6gc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nydkh6gc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"}
2026-04-13 00:48:30.384159 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_cg3_fbqd/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_cg3_fbqd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_cg3_fbqd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_cg3_fbqd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"}
2026-04-13 00:48:30.384187 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_k4de05ns/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_k4de05ns/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_k4de05ns/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_k4de05ns/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"}
2026-04-13 00:48:30.384210 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pgej400r/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pgej400r/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_pgej400r/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pgej400r/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"}
2026-04-13 00:48:30.384229 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_oa4flwl6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_oa4flwl6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_oa4flwl6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_oa4flwl6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"}
2026-04-13 00:48:30.384244 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3c23eeyl/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3c23eeyl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_3c23eeyl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3c23eeyl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"}
2026-04-13 00:48:30.384260 | orchestrator |
2026-04-13 00:48:30.384270 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:48:30.384278 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-13 00:48:30.384292 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-13 00:48:30.384301 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-13 00:48:30.384309 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-13 00:48:30.384321 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-13 00:48:30.384329 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-13 00:48:30.384338 | orchestrator |
2026-04-13 00:48:30.384347 | orchestrator |
2026-04-13 00:48:30.384356 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:48:30.384364 | orchestrator | Monday 13 April 2026 00:48:27 +0000 (0:00:04.145) 0:00:32.761 **********
2026-04-13 00:48:30.384372 | orchestrator | ===============================================================================
2026-04-13 00:48:30.384381 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.61s
2026-04-13 00:48:30.384389 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 4.15s
2026-04-13 00:48:30.384421 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.97s
2026-04-13 00:48:30.384430 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.78s
2026-04-13 00:48:30.384438 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.77s
2026-04-13 00:48:30.384447 | orchestrator | module-load : Load modules ---------------------------------------------- 2.53s
2026-04-13 00:48:30.384454 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.30s
2026-04-13 00:48:30.384462 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.03s
2026-04-13 00:48:30.384469 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.82s
2026-04-13 00:48:30.384477 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.30s
2026-04-13 00:48:30.384485 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.21s
2026-04-13 00:48:30.384493 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s
2026-04-13 00:48:30.384501 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.94s
2026-04-13 00:48:30.384509 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2026-04-13 00:48:30.384517 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.77s
2026-04-13 00:48:30.384623 | orchestrator | 2026-04-13 00:48:30 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:48:30.385189 | orchestrator | 2026-04-13 00:48:30 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED
2026-04-13 00:48:30.388492 | orchestrator | 2026-04-13 00:48:30 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED
2026-04-13 00:48:30.388555 | orchestrator | 2026-04-13 00:48:30 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:48:33.427924 | orchestrator | 2026-04-13 00:48:33 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED
2026-04-13 00:48:33.428140 | orchestrator | 2026-04-13 00:48:33 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED
2026-04-13 00:48:33.429744 | orchestrator | 2026-04-13 00:48:33 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:48:33.431343 | orchestrator | 2026-04-13 00:48:33 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED
2026-04-13 00:48:33.432453 | orchestrator | 2026-04-13 00:48:33 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED
2026-04-13 00:48:33.432499 | orchestrator | 2026-04-13 00:48:33 | INFO  |
Wait 1 second(s) until the next check 2026-04-13 00:48:36.488794 | orchestrator | 2026-04-13 00:48:36 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:36.489293 | orchestrator | 2026-04-13 00:48:36 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:36.490783 | orchestrator | 2026-04-13 00:48:36 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:36.491602 | orchestrator | 2026-04-13 00:48:36 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:36.493490 | orchestrator | 2026-04-13 00:48:36 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:36.493536 | orchestrator | 2026-04-13 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:39.548447 | orchestrator | 2026-04-13 00:48:39 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:39.550950 | orchestrator | 2026-04-13 00:48:39 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:39.551734 | orchestrator | 2026-04-13 00:48:39 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:39.552618 | orchestrator | 2026-04-13 00:48:39 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:39.554107 | orchestrator | 2026-04-13 00:48:39 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:39.554295 | orchestrator | 2026-04-13 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:42.595007 | orchestrator | 2026-04-13 00:48:42 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:42.596074 | orchestrator | 2026-04-13 00:48:42 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:42.597439 | orchestrator | 2026-04-13 00:48:42 | INFO  | Task 
8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:42.598853 | orchestrator | 2026-04-13 00:48:42 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:42.600701 | orchestrator | 2026-04-13 00:48:42 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:42.600734 | orchestrator | 2026-04-13 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:45.645767 | orchestrator | 2026-04-13 00:48:45 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:45.646279 | orchestrator | 2026-04-13 00:48:45 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:45.647161 | orchestrator | 2026-04-13 00:48:45 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:45.647988 | orchestrator | 2026-04-13 00:48:45 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:45.649258 | orchestrator | 2026-04-13 00:48:45 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:45.649352 | orchestrator | 2026-04-13 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:48.683977 | orchestrator | 2026-04-13 00:48:48 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:48.685029 | orchestrator | 2026-04-13 00:48:48 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:48.685814 | orchestrator | 2026-04-13 00:48:48 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:48.687339 | orchestrator | 2026-04-13 00:48:48 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:48.688072 | orchestrator | 2026-04-13 00:48:48 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:48.688110 | orchestrator | 2026-04-13 00:48:48 | INFO  | Wait 1 
second(s) until the next check 2026-04-13 00:48:51.713766 | orchestrator | 2026-04-13 00:48:51 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state STARTED 2026-04-13 00:48:51.713843 | orchestrator | 2026-04-13 00:48:51 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state STARTED 2026-04-13 00:48:51.714062 | orchestrator | 2026-04-13 00:48:51 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:48:51.714699 | orchestrator | 2026-04-13 00:48:51 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:48:51.715475 | orchestrator | 2026-04-13 00:48:51 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:48:51.715530 | orchestrator | 2026-04-13 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:48:54.762865 | orchestrator | 2026-04-13 00:48:54 | INFO  | Task e841abb4-2808-4e5c-822c-838afb178313 is in state SUCCESS 2026-04-13 00:48:54.764591 | orchestrator | 2026-04-13 00:48:54.764625 | orchestrator | 2026-04-13 00:48:54.764632 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:48:54.764638 | orchestrator | 2026-04-13 00:48:54.764644 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:48:54.764649 | orchestrator | Monday 13 April 2026 00:48:32 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-04-13 00:48:54.764655 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:54.764662 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:54.764667 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:54.764672 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:48:54.764678 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:48:54.764683 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:48:54.764688 | orchestrator | 2026-04-13 00:48:54.764693 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-04-13 00:48:54.764698 | orchestrator | Monday 13 April 2026 00:48:33 +0000 (0:00:01.219) 0:00:01.506 ********** 2026-04-13 00:48:54.764704 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-13 00:48:54.764710 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-13 00:48:54.764715 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-13 00:48:54.764720 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-13 00:48:54.764725 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-13 00:48:54.764730 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-13 00:48:54.764736 | orchestrator | 2026-04-13 00:48:54.764754 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-13 00:48:54.764759 | orchestrator | 2026-04-13 00:48:54.764781 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-13 00:48:54.764786 | orchestrator | Monday 13 April 2026 00:48:35 +0000 (0:00:02.404) 0:00:03.910 ********** 2026-04-13 00:48:54.764793 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:48:54.764799 | orchestrator | 2026-04-13 00:48:54.764804 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-13 00:48:54.764809 | orchestrator | Monday 13 April 2026 00:48:37 +0000 (0:00:01.558) 0:00:05.468 ********** 2026-04-13 00:48:54.764817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764862 | orchestrator | 2026-04-13 00:48:54.764867 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-13 00:48:54.764872 | orchestrator | Monday 13 April 2026 00:48:40 +0000 (0:00:03.107) 0:00:08.576 ********** 2026-04-13 00:48:54.764878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764918 | orchestrator | 2026-04-13 00:48:54.764923 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-13 00:48:54.764928 | orchestrator | Monday 13 
April 2026 00:48:43 +0000 (0:00:02.660) 0:00:11.237 ********** 2026-04-13 00:48:54.764934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-13 00:48:54.764967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764977 | orchestrator | 2026-04-13 00:48:54.764983 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-13 00:48:54.764988 | orchestrator | Monday 13 April 2026 00:48:45 +0000 (0:00:02.183) 0:00:13.420 ********** 2026-04-13 00:48:54.764993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.764998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765034 | orchestrator | 2026-04-13 00:48:54.765039 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-13 00:48:54.765044 | orchestrator | Monday 13 April 2026 00:48:46 +0000 (0:00:01.720) 0:00:15.141 ********** 2026-04-13 00:48:54.765053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-13 00:48:54.765069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 00:48:54.765084 | orchestrator | 2026-04-13 00:48:54.765090 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-13 00:48:54.765096 | orchestrator | Monday 13 April 2026 00:48:49 +0000 (0:00:02.109) 0:00:17.251 ********** 2026-04-13 00:48:54.765101 | orchestrator | changed: [testbed-node-0] => { 2026-04-13 00:48:54.765119 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.765124 | orchestrator | } 2026-04-13 00:48:54.765129 | 
orchestrator | changed: [testbed-node-1] => { 2026-04-13 00:48:54.765135 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.765167 | orchestrator | } 2026-04-13 00:48:54.765173 | orchestrator | changed: [testbed-node-2] => { 2026-04-13 00:48:54.765178 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.765183 | orchestrator | } 2026-04-13 00:48:54.765188 | orchestrator | changed: [testbed-node-3] => { 2026-04-13 00:48:54.765193 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.765199 | orchestrator | } 2026-04-13 00:48:54.765205 | orchestrator | changed: [testbed-node-4] => { 2026-04-13 00:48:54.765210 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.765216 | orchestrator | } 2026-04-13 00:48:54.765225 | orchestrator | changed: [testbed-node-5] => { 2026-04-13 00:48:54.765232 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.765237 | orchestrator | } 2026-04-13 00:48:54.765243 | orchestrator | 2026-04-13 00:48:54.765249 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:48:54.765255 | orchestrator | Monday 13 April 2026 00:48:49 +0000 (0:00:00.797) 0:00:18.049 ********** 2026-04-13 00:48:54.765261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:48:54.765267 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:54.765276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:48:54.765283 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:54.765289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:48:54.765295 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:54.765301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:48:54.765307 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:48:54.765313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:48:54.765319 | orchestrator | 
skipping: [testbed-node-4] 2026-04-13 00:48:54.765326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:48:54.765336 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:48:54.765342 | orchestrator | 2026-04-13 00:48:54.765349 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-13 00:48:54.765358 | orchestrator | Monday 13 April 2026 00:48:51 +0000 (0:00:01.153) 0:00:19.202 ********** 2026-04-13 00:48:54.765366 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:48:54.765376 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:48:54.765384 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:48:54.765392 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:48:54.765401 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:48:54.765413 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:48:54.765422 | orchestrator | 2026-04-13 00:48:54.765446 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:48:54.765456 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:54.765465 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:54.765474 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:54.765483 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:54.765496 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:54.765505 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-13 00:48:54.765513 | orchestrator | 2026-04-13 00:48:54.765522 | orchestrator | 2026-04-13 00:48:54.765531 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:48:54.765537 | orchestrator | Monday 13 April 2026 00:48:52 +0000 (0:00:01.142) 0:00:20.345 ********** 2026-04-13 00:48:54.765544 | orchestrator | =============================================================================== 2026-04-13 00:48:54.765550 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 3.11s 2026-04-13 00:48:54.765556 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.66s 2026-04-13 00:48:54.765562 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.40s 2026-04-13 00:48:54.765567 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.18s 
2026-04-13 00:48:54.765572 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.11s 2026-04-13 00:48:54.765577 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.72s 2026-04-13 00:48:54.765582 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.56s 2026-04-13 00:48:54.765592 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.22s 2026-04-13 00:48:54.765597 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.15s 2026-04-13 00:48:54.765602 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.14s 2026-04-13 00:48:54.765607 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 0.80s 2026-04-13 00:48:54.765666 | orchestrator | 2026-04-13 00:48:54 | INFO  | Task e748bbd9-6f77-4ca4-9983-efc493b0a3a7 is in state SUCCESS 2026-04-13 00:48:54.765950 | orchestrator | 2026-04-13 00:48:54.765962 | orchestrator | 2026-04-13 00:48:54.765967 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-13 00:48:54.765973 | orchestrator | 2026-04-13 00:48:54.765978 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-13 00:48:54.765983 | orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:00.293) 0:00:00.294 ********** 2026-04-13 00:48:54.765989 | orchestrator | ok: [localhost] => { 2026-04-13 00:48:54.765994 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-04-13 00:48:54.766000 | orchestrator | } 2026-04-13 00:48:54.766005 | orchestrator | 2026-04-13 00:48:54.766011 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-13 00:48:54.766048 | orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:00.067) 0:00:00.361 ********** 2026-04-13 00:48:54.766055 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-13 00:48:54.766060 | orchestrator | ...ignoring 2026-04-13 00:48:54.766066 | orchestrator | 2026-04-13 00:48:54.766071 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-13 00:48:54.766076 | orchestrator | Monday 13 April 2026 00:48:19 +0000 (0:00:03.508) 0:00:03.869 ********** 2026-04-13 00:48:54.766081 | orchestrator | skipping: [localhost] 2026-04-13 00:48:54.766086 | orchestrator | 2026-04-13 00:48:54.766091 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-13 00:48:54.766096 | orchestrator | Monday 13 April 2026 00:48:19 +0000 (0:00:00.105) 0:00:03.975 ********** 2026-04-13 00:48:54.766101 | orchestrator | ok: [localhost] 2026-04-13 00:48:54.766106 | orchestrator | 2026-04-13 00:48:54.766112 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:48:54.766116 | orchestrator | 2026-04-13 00:48:54.766121 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:48:54.766126 | orchestrator | Monday 13 April 2026 00:48:19 +0000 (0:00:00.412) 0:00:04.387 ********** 2026-04-13 00:48:54.766131 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:54.766136 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:48:54.766141 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:48:54.766146 | orchestrator | 2026-04-13 
00:48:54.766151 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:48:54.766156 | orchestrator | Monday 13 April 2026 00:48:19 +0000 (0:00:00.318) 0:00:04.706 ********** 2026-04-13 00:48:54.766161 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-13 00:48:54.766167 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-13 00:48:54.766172 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-13 00:48:54.766177 | orchestrator | 2026-04-13 00:48:54.766182 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-13 00:48:54.766187 | orchestrator | 2026-04-13 00:48:54.766192 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-13 00:48:54.766197 | orchestrator | Monday 13 April 2026 00:48:20 +0000 (0:00:01.011) 0:00:05.718 ********** 2026-04-13 00:48:54.766202 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:48:54.766226 | orchestrator | 2026-04-13 00:48:54.766231 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-13 00:48:54.766236 | orchestrator | Monday 13 April 2026 00:48:22 +0000 (0:00:01.167) 0:00:06.885 ********** 2026-04-13 00:48:54.766241 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:48:54.766246 | orchestrator | 2026-04-13 00:48:54.766251 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-13 00:48:54.766285 | orchestrator | Monday 13 April 2026 00:48:23 +0000 (0:00:01.642) 0:00:08.527 ********** 2026-04-13 00:48:54.766292 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:54.766297 | orchestrator | 2026-04-13 00:48:54.766307 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-04-13 00:48:54.766312 | orchestrator | Monday 13 April 2026 00:48:24 +0000 (0:00:00.705) 0:00:09.233 **********
2026-04-13 00:48:54.766317 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.766322 | orchestrator | 
2026-04-13 00:48:54.766327 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-13 00:48:54.766332 | orchestrator | Monday 13 April 2026 00:48:25 +0000 (0:00:00.750) 0:00:09.983 **********
2026-04-13 00:48:54.766337 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.766342 | orchestrator | 
2026-04-13 00:48:54.766373 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-13 00:48:54.766380 | orchestrator | Monday 13 April 2026 00:48:26 +0000 (0:00:01.020) 0:00:11.003 **********
2026-04-13 00:48:54.766385 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.766390 | orchestrator | 
2026-04-13 00:48:54.766395 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-13 00:48:54.766400 | orchestrator | Monday 13 April 2026 00:48:26 +0000 (0:00:00.512) 0:00:11.516 **********
2026-04-13 00:48:54.766405 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:48:54.766410 | orchestrator | 
2026-04-13 00:48:54.766415 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-13 00:48:54.766420 | orchestrator | Monday 13 April 2026 00:48:27 +0000 (0:00:00.744) 0:00:12.261 **********
2026-04-13 00:48:54.766448 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:48:54.766454 | orchestrator | 
2026-04-13 00:48:54.766459 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-13 00:48:54.766464 | orchestrator | Monday 13 April 2026 00:48:28 +0000 (0:00:00.904) 0:00:13.165 **********
2026-04-13 00:48:54.766469 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.766474 | orchestrator | 
2026-04-13 00:48:54.766479 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-13 00:48:54.766484 | orchestrator | Monday 13 April 2026 00:48:29 +0000 (0:00:00.685) 0:00:13.851 **********
2026-04-13 00:48:54.766489 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.766494 | orchestrator | 
2026-04-13 00:48:54.766505 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-04-13 00:48:54.766510 | orchestrator | Monday 13 April 2026 00:48:29 +0000 (0:00:00.316) 0:00:14.167 **********
2026-04-13 00:48:54.766519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766545 | orchestrator | 
2026-04-13 00:48:54.766551 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-04-13 00:48:54.766556 | orchestrator | Monday 13 April 2026 00:48:30 +0000 (0:00:01.301) 0:00:15.468 **********
2026-04-13 00:48:54.766618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766653 | orchestrator | 
2026-04-13 00:48:54.766659 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-13 00:48:54.766665 | orchestrator | Monday 13 April 2026 00:48:32 +0000 (0:00:01.540) 0:00:17.009 **********
2026-04-13 00:48:54.766671 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-13 00:48:54.766677 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-13 00:48:54.766683 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-13 00:48:54.766689 | orchestrator | 
2026-04-13 00:48:54.766695 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-13 00:48:54.766700 | orchestrator | Monday 13 April 2026 00:48:34 +0000 (0:00:02.064) 0:00:19.074 **********
2026-04-13 00:48:54.766706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-13 00:48:54.766712 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-13 00:48:54.766718 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-13 00:48:54.766724 | orchestrator | 
2026-04-13 00:48:54.766729 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-13 00:48:54.766735 | orchestrator | Monday 13 April 2026 00:48:37 +0000 (0:00:02.965) 0:00:22.040 **********
2026-04-13 00:48:54.766741 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-13 00:48:54.766747 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-13 00:48:54.766752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-13 00:48:54.766758 | orchestrator | 
2026-04-13 00:48:54.766768 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-13 00:48:54.766774 | orchestrator | Monday 13 April 2026 00:48:39 +0000 (0:00:02.029) 0:00:24.069 **********
2026-04-13 00:48:54.766780 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-13 00:48:54.766790 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-13 00:48:54.766795 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-13 00:48:54.766801 | orchestrator | 
2026-04-13 00:48:54.766807 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-13 00:48:54.766813 | orchestrator | Monday 13 April 2026 00:48:41 +0000 (0:00:02.484) 0:00:26.553 **********
2026-04-13 00:48:54.766819 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-13 00:48:54.766825 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-13 00:48:54.766831 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-13 00:48:54.766836 | orchestrator | 
2026-04-13 00:48:54.766842 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-13 00:48:54.766848 | orchestrator | Monday 13 April 2026 00:48:43 +0000 (0:00:01.853) 0:00:28.406 **********
2026-04-13 00:48:54.766854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-13 00:48:54.766860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-13 00:48:54.766866 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-13 00:48:54.766871 | orchestrator | 
2026-04-13 00:48:54.766877 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-13 00:48:54.766883 | orchestrator | Monday 13 April 2026 00:48:45 +0000 (0:00:01.783) 0:00:30.190 **********
2026-04-13 00:48:54.766889 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:48:54.766894 | orchestrator | 
2026-04-13 00:48:54.766900 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-04-13 00:48:54.766906 | orchestrator | Monday 13 April 2026 00:48:46 +0000 (0:00:01.050) 0:00:31.240 **********
2026-04-13 00:48:54.766915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766944 | orchestrator | 
2026-04-13 00:48:54.766949 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-04-13 00:48:54.766954 | orchestrator | Monday 13 April 2026 00:48:47 +0000 (0:00:01.468) 0:00:32.709 **********
2026-04-13 00:48:54.766960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766975 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.766980 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:48:54.766993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.766999 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:48:54.767024 | orchestrator | 
2026-04-13 00:48:54.767030 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-04-13 00:48:54.767035 | orchestrator | Monday 13 April 2026 00:48:48 +0000 (0:00:00.503) 0:00:33.213 **********
2026-04-13 00:48:54.767040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.767049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.767054 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:48:54.767059 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:48:54.767065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.767075 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:48:54.767080 | orchestrator | 
2026-04-13 00:48:54.767085 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-04-13 00:48:54.767090 | orchestrator | Monday 13 April 2026 00:48:49 +0000 (0:00:01.115) 0:00:34.328 **********
2026-04-13 00:48:54.767099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.767106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.767115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-13 00:48:54.767124 | orchestrator | 
2026-04-13 00:48:54.767129 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-04-13 00:48:54.767134 | orchestrator | Monday 13 April 2026 00:48:50 +0000 (0:00:01.280) 0:00:35.608 **********
2026-04-13 00:48:54.767139 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 00:48:54.767144 | orchestrator |     "msg": "Notifying handlers"
2026-04-13 00:48:54.767149 | orchestrator | }
2026-04-13 00:48:54.767154 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 00:48:54.767159 | orchestrator |     "msg": "Notifying handlers"
2026-04-13 00:48:54.767164 | orchestrator | }
2026-04-13 00:48:54.767169 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 00:48:54.767174 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:48:54.767179 | orchestrator | } 2026-04-13 00:48:54.767184 | orchestrator | 2026-04-13 00:48:54.767189 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:48:54.767194 | orchestrator | Monday 13 April 2026 00:48:51 +0000 (0:00:00.418) 0:00:36.026 ********** 2026-04-13 00:48:54.767204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-13 00:48:54.767210 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:48:54.767216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-13 00:48:54.767221 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:48:54.767229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-13 00:48:54.767238 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:48:54.767243 | orchestrator | 2026-04-13 00:48:54.767249 
| orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-04-13 00:48:54.767254 | orchestrator | Monday 13 April 2026 00:48:52 +0000 (0:00:00.959) 0:00:36.986 **********
2026-04-13 00:48:54.767259 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:48:54.767264 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:48:54.767269 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:48:54.767274 | orchestrator |
2026-04-13 00:48:54.767279 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-13 00:48:54.767284 | orchestrator | Monday 13 April 2026 00:48:52 +0000 (0:00:00.795) 0:00:37.781 **********
2026-04-13 00:48:54.767374 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pzmp5kyh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pzmp5kyh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pzmp5kyh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"}
2026-04-13 00:48:54.767389 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_mppukev0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_mppukev0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_mppukev0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"}
2026-04-13 00:48:54.767404 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_rd6w_tef/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_rd6w_tef/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_rd6w_tef/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"}
2026-04-13 00:48:54.767413 | orchestrator |
2026-04-13 00:48:54.767419 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:48:54.767467 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-13 00:48:54.767480 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0
2026-04-13 00:48:54.767492 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0
2026-04-13 00:48:54.767500 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0
2026-04-13 00:48:54.767508 | orchestrator |
2026-04-13 00:48:54.767517 | orchestrator |
2026-04-13 00:48:54.767522 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:48:54.767527 | orchestrator | Monday 13 April 2026 00:48:53 +0000 (0:00:00.987) 0:00:38.768 **********
2026-04-13 00:48:54.767532 | orchestrator | ===============================================================================
2026-04-13 00:48:54.767537 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.51s
2026-04-13 00:48:54.767542 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.97s
2026-04-13 00:48:54.767548 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.48s
2026-04-13 00:48:54.767553 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.07s
2026-04-13 00:48:54.767557 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.03s
2026-04-13 00:48:54.767562 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.85s
2026-04-13 00:48:54.767567 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.78s
2026-04-13 00:48:54.767572 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.64s
2026-04-13 00:48:54.767577 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.54s
2026-04-13 00:48:54.767582 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.47s
2026-04-13 00:48:54.767587 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.30s
2026-04-13 00:48:54.767596 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.28s
2026-04-13 00:48:54.767601 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.17s
2026-04-13 00:48:54.767606 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 1.12s
2026-04-13 00:48:54.767611 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.05s
2026-04-13 00:48:54.767616 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.02s
2026-04-13 00:48:54.767621 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s
2026-04-13 00:48:54.767626 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 0.99s
2026-04-13 00:48:54.767631 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.96s
2026-04-13 00:48:54.767636 | orchestrator |
rabbitmq : Get container facts ------------------------------------------ 0.90s
2026-04-13 00:48:54.767641 | orchestrator | 2026-04-13 00:48:54 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:48:54.768020 | orchestrator | 2026-04-13 00:48:54 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED
2026-04-13 00:48:54.770574 | orchestrator | 2026-04-13 00:48:54 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED
2026-04-13 00:48:54.770613 | orchestrator | 2026-04-13 00:48:54 | INFO  | Wait 1 second(s) until the next check
state STARTED 2026-04-13 00:50:26.159678 | orchestrator | 2026-04-13 00:50:26 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:26.160316 | orchestrator | 2026-04-13 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:29.187967 | orchestrator | 2026-04-13 00:50:29 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:29.188121 | orchestrator | 2026-04-13 00:50:29 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:29.188816 | orchestrator | 2026-04-13 00:50:29 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:29.188848 | orchestrator | 2026-04-13 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:32.226266 | orchestrator | 2026-04-13 00:50:32 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:32.228466 | orchestrator | 2026-04-13 00:50:32 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:32.228834 | orchestrator | 2026-04-13 00:50:32 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:32.228858 | orchestrator | 2026-04-13 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:35.276765 | orchestrator | 2026-04-13 00:50:35 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:35.277896 | orchestrator | 2026-04-13 00:50:35 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:35.280756 | orchestrator | 2026-04-13 00:50:35 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:35.280807 | orchestrator | 2026-04-13 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:38.314857 | orchestrator | 2026-04-13 00:50:38 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:38.316692 | orchestrator 
| 2026-04-13 00:50:38 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:38.317451 | orchestrator | 2026-04-13 00:50:38 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:38.317607 | orchestrator | 2026-04-13 00:50:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:41.368838 | orchestrator | 2026-04-13 00:50:41 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:41.369051 | orchestrator | 2026-04-13 00:50:41 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:41.369906 | orchestrator | 2026-04-13 00:50:41 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:41.369995 | orchestrator | 2026-04-13 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:44.404967 | orchestrator | 2026-04-13 00:50:44 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:44.405540 | orchestrator | 2026-04-13 00:50:44 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:44.406582 | orchestrator | 2026-04-13 00:50:44 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:44.406647 | orchestrator | 2026-04-13 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:47.458699 | orchestrator | 2026-04-13 00:50:47 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:47.460751 | orchestrator | 2026-04-13 00:50:47 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:47.462838 | orchestrator | 2026-04-13 00:50:47 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:47.462879 | orchestrator | 2026-04-13 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:50.507165 | orchestrator | 2026-04-13 00:50:50 | INFO  | Task 
8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:50.507697 | orchestrator | 2026-04-13 00:50:50 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:50.509429 | orchestrator | 2026-04-13 00:50:50 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:50.509453 | orchestrator | 2026-04-13 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:53.544466 | orchestrator | 2026-04-13 00:50:53 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:53.546077 | orchestrator | 2026-04-13 00:50:53 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:53.547660 | orchestrator | 2026-04-13 00:50:53 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:53.547723 | orchestrator | 2026-04-13 00:50:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:56.600229 | orchestrator | 2026-04-13 00:50:56 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:56.600379 | orchestrator | 2026-04-13 00:50:56 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:56.600407 | orchestrator | 2026-04-13 00:50:56 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:56.600428 | orchestrator | 2026-04-13 00:50:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:50:59.634333 | orchestrator | 2026-04-13 00:50:59 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:50:59.635056 | orchestrator | 2026-04-13 00:50:59 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:50:59.635668 | orchestrator | 2026-04-13 00:50:59 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:50:59.636916 | orchestrator | 2026-04-13 00:50:59 | INFO  | Wait 1 second(s) until the next 
check 2026-04-13 00:51:02.697400 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:51:02.701344 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:51:02.703444 | orchestrator | 2026-04-13 00:51:02 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:51:02.703496 | orchestrator | 2026-04-13 00:51:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:05.751295 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:51:05.751372 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:51:05.751394 | orchestrator | 2026-04-13 00:51:05 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:51:05.751436 | orchestrator | 2026-04-13 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:08.786717 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:51:08.786884 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state STARTED 2026-04-13 00:51:08.787481 | orchestrator | 2026-04-13 00:51:08 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:51:08.787556 | orchestrator | 2026-04-13 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:11.836965 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task d9a3a43d-52be-4d6f-868c-a27a41788ce9 is in state STARTED 2026-04-13 00:51:11.839500 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:51:11.845903 | orchestrator | 2026-04-13 00:51:11.845969 | orchestrator | 2026-04-13 00:51:11.845978 | orchestrator | PLAY [Prepare all k3s nodes] 
*************************************************** 2026-04-13 00:51:11.845987 | orchestrator | 2026-04-13 00:51:11.845994 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-13 00:51:11.846002 | orchestrator | Monday 13 April 2026 00:46:23 +0000 (0:00:00.229) 0:00:00.229 ********** 2026-04-13 00:51:11.846009 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:51:11.846079 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:51:11.846087 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:51:11.846094 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:11.846100 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:11.846107 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:11.846114 | orchestrator | 2026-04-13 00:51:11.846121 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-13 00:51:11.846128 | orchestrator | Monday 13 April 2026 00:46:24 +0000 (0:00:00.662) 0:00:00.891 ********** 2026-04-13 00:51:11.846136 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:11.846144 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:51:11.846150 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:11.846157 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:11.846164 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.846170 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.846177 | orchestrator | 2026-04-13 00:51:11.846184 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-13 00:51:11.846191 | orchestrator | Monday 13 April 2026 00:46:24 +0000 (0:00:00.653) 0:00:01.545 ********** 2026-04-13 00:51:11.846197 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:11.846204 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:51:11.846211 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:11.846218 | orchestrator | skipping: 
[testbed-node-0] 2026-04-13 00:51:11.846225 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.846232 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.846239 | orchestrator | 2026-04-13 00:51:11.846246 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-13 00:51:11.846253 | orchestrator | Monday 13 April 2026 00:46:25 +0000 (0:00:00.627) 0:00:02.173 ********** 2026-04-13 00:51:11.846259 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:51:11.846266 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:51:11.846273 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:51:11.846279 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.846286 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:11.846293 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:11.846299 | orchestrator | 2026-04-13 00:51:11.846306 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-13 00:51:11.846326 | orchestrator | Monday 13 April 2026 00:46:27 +0000 (0:00:02.132) 0:00:04.305 ********** 2026-04-13 00:51:11.846351 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:51:11.846358 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:51:11.846364 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.846371 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:11.846377 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:11.846384 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:51:11.846391 | orchestrator | 2026-04-13 00:51:11.846398 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-13 00:51:11.846404 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:01.579) 0:00:05.885 ********** 2026-04-13 00:51:11.846411 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:51:11.846418 | orchestrator | changed: [testbed-node-3] 
2026-04-13 00:51:11.846424 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:51:11.846431 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:11.846438 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:11.846444 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:11.846451 | orchestrator |
2026-04-13 00:51:11.846458 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-13 00:51:11.846464 | orchestrator | Monday 13 April 2026 00:46:32 +0000 (0:00:03.120) 0:00:09.006 **********
2026-04-13 00:51:11.846471 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.846478 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.846485 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.846492 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.846500 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.846507 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.846531 | orchestrator |
2026-04-13 00:51:11.846537 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-13 00:51:11.846545 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:01.092) 0:00:10.099 **********
2026-04-13 00:51:11.846551 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.846557 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.846564 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.846571 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.846578 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.846585 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.846591 | orchestrator |
2026-04-13 00:51:11.846597 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-13 00:51:11.846604 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:01.018) 0:00:11.118 **********
2026-04-13 00:51:11.846610 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:11.846616 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:11.846622 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.846629 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:11.846635 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:11.846641 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.846647 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:11.846653 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:11.846659 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.846665 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:11.846685 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:11.846691 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.846697 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:11.846703 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:11.846710 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.846722 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-13 00:51:11.846728 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-13 00:51:11.846734 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.846741 | orchestrator |
2026-04-13 00:51:11.846746 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-13 00:51:11.846752 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:01.325) 0:00:12.137 **********
2026-04-13 00:51:11.846758 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.846763 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.846769 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.846775 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.846780 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.846786 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.846792 | orchestrator |
2026-04-13 00:51:11.846798 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-13 00:51:11.846807 | orchestrator | Monday 13 April 2026 00:46:36 +0000 (0:00:01.747) 0:00:13.463 **********
2026-04-13 00:51:11.846813 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:51:11.846819 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:51:11.846825 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:51:11.846831 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:11.846837 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:11.846844 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:11.846850 | orchestrator |
2026-04-13 00:51:11.846856 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-13 00:51:11.846862 | orchestrator | Monday 13 April 2026 00:46:38 +0000 (0:00:01.747) 0:00:15.210 **********
2026-04-13 00:51:11.846869 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:51:11.846875 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:51:11.846881 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:51:11.846887 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:51:11.846893 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:51:11.846899 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:51:11.846905 | orchestrator |
2026-04-13 00:51:11.846911 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-13 00:51:11.846918 | orchestrator | Monday 13 April 2026 00:46:45 +0000 (0:00:07.141) 0:00:22.352 **********
2026-04-13 00:51:11.846924 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.846930 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.846935 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.846941 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.846947 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.846953 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.846959 | orchestrator |
2026-04-13 00:51:11.846965 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-13 00:51:11.846972 | orchestrator | Monday 13 April 2026 00:46:46 +0000 (0:00:01.210) 0:00:23.562 **********
2026-04-13 00:51:11.846978 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.846984 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.846990 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.846996 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.847002 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.847008 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.847014 | orchestrator |
2026-04-13 00:51:11.847021 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-13 00:51:11.847028 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:02.145) 0:00:25.707 **********
2026-04-13 00:51:11.847035 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.847041 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.847047 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.847053 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.847065 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.847072 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.847078 | orchestrator |
2026-04-13 00:51:11.847084 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-13 00:51:11.847090 | orchestrator | Monday 13 April 2026 00:46:50 +0000 (0:00:01.384) 0:00:27.092 **********
2026-04-13 00:51:11.847096 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-13 00:51:11.847103 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-13 00:51:11.847109 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.847115 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-13 00:51:11.847121 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-13 00:51:11.847128 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.847134 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-13 00:51:11.847140 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-13 00:51:11.847146 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-13 00:51:11.847152 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-13 00:51:11.847158 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.847164 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-13 00:51:11.847170 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-13 00:51:11.847176 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.847182 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.847188 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-13 00:51:11.847194 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-13 00:51:11.847231 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.847238 | orchestrator |
2026-04-13 00:51:11.847244 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-13 00:51:11.847257 | orchestrator | Monday 13 April 2026 00:46:51 +0000 (0:00:00.807) 0:00:27.900 **********
2026-04-13 00:51:11.847263 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.847269 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.847275 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.847282 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.847288 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.847294 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.847300 | orchestrator |
2026-04-13 00:51:11.847306 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-13 00:51:11.847312 | orchestrator | Monday 13 April 2026 00:46:51 +0000 (0:00:00.797) 0:00:28.698 **********
2026-04-13 00:51:11.847318 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:51:11.847324 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:51:11.847330 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:51:11.847336 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.847342 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.847348 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:51:11.847355 | orchestrator |
2026-04-13 00:51:11.847361 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-13 00:51:11.847367 | orchestrator |
2026-04-13 00:51:11.847373 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-13 00:51:11.847379 | orchestrator | Monday 13 April 2026 00:46:53 +0000 (0:00:01.915) 0:00:30.613 **********
2026-04-13 00:51:11.847385 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:11.847391 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:11.847397 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:11.847404 | orchestrator |
2026-04-13 00:51:11.847410 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-13 00:51:11.847416 | orchestrator | Monday 13 April 2026 00:46:56 +0000 (0:00:02.175) 0:00:32.788 **********
2026-04-13 00:51:11.847423 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:11.847428 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:11.847438 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:11.847443 | orchestrator |
2026-04-13 00:51:11.847448 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-13 00:51:11.847454 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:01.698) 0:00:34.487 **********
2026-04-13 00:51:11.847459 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:11.847465 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:11.847471 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:11.847478 | orchestrator |
2026-04-13 00:51:11.847483 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-13 00:51:11.847489 | orchestrator | Monday 13 April 2026 00:46:58 +0000 (0:00:00.852) 0:00:35.426 **********
2026-04-13 00:51:11.847499 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:51:11.847506 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:51:11.847553 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:51:11.847560 | orchestrator |
2026-04-13 00:51:11.847567 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-13 00:51:11.847573 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:00.319) 0:00:36.279 **********
2026-04-13 00:51:11.847579 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:51:11.847585 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:51:11.847591 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.847598 | orchestrator | 2026-04-13 00:51:11.847604 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-13 00:51:11.847610 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:00.319) 0:00:36.599 ********** 2026-04-13 00:51:11.847617 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:11.847623 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.847629 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:11.847635 | orchestrator | 2026-04-13 00:51:11.847642 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-13 00:51:11.847648 | orchestrator | Monday 13 April 2026 00:47:00 +0000 (0:00:00.883) 0:00:37.482 ********** 2026-04-13 00:51:11.847654 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:11.847661 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.847667 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:11.847673 | orchestrator | 2026-04-13 00:51:11.847679 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-13 00:51:11.847686 | orchestrator | Monday 13 April 2026 00:47:02 +0000 (0:00:01.968) 0:00:39.451 ********** 2026-04-13 00:51:11.847692 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:51:11.847699 | orchestrator | 2026-04-13 00:51:11.847705 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-13 00:51:11.847711 | orchestrator | Monday 13 April 2026 00:47:03 +0000 (0:00:01.172) 0:00:40.623 ********** 2026-04-13 00:51:11.847717 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:11.847724 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:11.847730 | orchestrator | ok: [testbed-node-1] 2026-04-13 
00:51:11.847736 | orchestrator | 2026-04-13 00:51:11.847742 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-13 00:51:11.847749 | orchestrator | Monday 13 April 2026 00:47:10 +0000 (0:00:06.372) 0:00:46.996 ********** 2026-04-13 00:51:11.847755 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.847761 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.847768 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.847774 | orchestrator | 2026-04-13 00:51:11.847780 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-13 00:51:11.847787 | orchestrator | Monday 13 April 2026 00:47:11 +0000 (0:00:01.700) 0:00:48.696 ********** 2026-04-13 00:51:11.847793 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.847799 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.847806 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.847812 | orchestrator | 2026-04-13 00:51:11.847818 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-13 00:51:11.847830 | orchestrator | Monday 13 April 2026 00:47:13 +0000 (0:00:01.998) 0:00:50.695 ********** 2026-04-13 00:51:11.847837 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.847843 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.847849 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.847855 | orchestrator | 2026-04-13 00:51:11.847862 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-13 00:51:11.847873 | orchestrator | Monday 13 April 2026 00:47:15 +0000 (0:00:02.048) 0:00:52.744 ********** 2026-04-13 00:51:11.847880 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:11.847886 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.847892 | orchestrator | skipping: [testbed-node-2] 2026-04-13 
00:51:11.847898 | orchestrator | 2026-04-13 00:51:11.847905 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-13 00:51:11.847914 | orchestrator | Monday 13 April 2026 00:47:16 +0000 (0:00:00.795) 0:00:53.539 ********** 2026-04-13 00:51:11.847919 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:11.847925 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.847931 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.847937 | orchestrator | 2026-04-13 00:51:11.847943 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-13 00:51:11.847949 | orchestrator | Monday 13 April 2026 00:47:17 +0000 (0:00:00.502) 0:00:54.042 ********** 2026-04-13 00:51:11.847954 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:51:11.847963 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:51:11.847970 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:51:11.847975 | orchestrator | 2026-04-13 00:51:11.847981 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-13 00:51:11.847987 | orchestrator | Monday 13 April 2026 00:47:19 +0000 (0:00:01.893) 0:00:55.936 ********** 2026-04-13 00:51:11.847993 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:11.847999 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:11.848005 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:11.848010 | orchestrator | 2026-04-13 00:51:11.848016 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-13 00:51:11.848021 | orchestrator | Monday 13 April 2026 00:47:22 +0000 (0:00:02.943) 0:00:58.879 ********** 2026-04-13 00:51:11.848027 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:51:11.848033 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:51:11.848038 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:51:11.848044 | orchestrator | 
TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Monday 13 April 2026 00:47:23 +0000 (0:00:01.274) 0:01:00.154 **********
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Monday 13 April 2026 00:48:07 +0000 (0:00:44.007) 0:01:44.162 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Monday 13 April 2026 00:48:07 +0000 (0:00:00.356) 0:01:44.518 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Monday 13 April 2026 00:48:09 +0000 (0:00:01.487) 0:01:46.006 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Enable and check K3s service] *******************************
Monday 13 April 2026 00:48:10 +0000 (0:00:01.574) 0:01:47.580 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Wait for node-token] ****************************************
Monday 13 April 2026 00:48:36 +0000 (0:00:25.326) 0:02:12.907 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Monday 13 April 2026 00:48:36 +0000 (0:00:00.740) 0:02:13.647 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Monday 13 April 2026 00:48:37 +0000 (0:00:00.907) 0:02:14.555 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Monday 13 April 2026 00:48:38 +0000 (0:00:00.779) 0:02:15.335 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Monday 13 April 2026 00:48:39 +0000 (0:00:01.004) 0:02:16.339 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Monday 13 April 2026 00:48:40 +0000 (0:00:00.436) 0:02:16.776 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Monday 13 April 2026 00:48:40 +0000 (0:00:00.906) 0:02:17.682 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Monday 13 April 2026 00:48:42 +0000 (0:00:01.287) 0:02:18.970 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Monday 13 April 2026 00:48:43 +0000 (0:00:01.062) 0:02:20.033 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Monday 13 April 2026 00:48:44 +0000 (0:00:00.873) 0:02:20.907 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Monday 13 April 2026 00:48:44 +0000 (0:00:00.333) 0:02:21.240 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Monday 13 April 2026 00:48:45 +0000 (0:00:00.552) 0:02:21.793 **********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Monday 13 April 2026 00:48:45 +0000 (0:00:00.786) 0:02:22.579 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Monday 13 April 2026 00:48:46 +0000 (0:00:00.766) 0:02:23.345 **********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 13 April 2026 00:48:50 +0000 (0:00:03.506) 0:02:26.852 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 13 April 2026 00:48:50 +0000 (0:00:00.294) 0:02:27.147 **********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 13 April 2026 00:48:51 +0000 (0:00:01.452) 0:02:28.599 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 13 April 2026 00:48:52 +0000 (0:00:00.289) 0:02:28.889 **********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 13 April 2026 00:48:52 +0000 (0:00:00.581) 0:02:29.470 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 13 April 2026 00:48:53 +0000 (0:00:00.301) 0:02:29.772 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 13 April 2026 00:48:53 +0000 (0:00:00.276) 0:02:30.048 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Monday 13 April 2026 00:48:53 +0000 (0:00:00.540) 0:02:30.589 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Monday 13 April 2026 00:48:54 +0000 (0:00:01.086) 0:02:31.327 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 13 April 2026 00:48:55 +0000 (0:00:01.331) 0:02:32.413 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 13 April 2026 00:48:56 +0000 (0:00:10.833) 0:02:33.744 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 13 April 2026 00:49:07 +0000 (0:00:00.851) 0:02:44.578 **********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 13 April 2026 00:49:08 +0000 (0:00:00.398) 0:02:45.429 **********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 13 April 2026 00:49:09 +0000 (0:00:00.670) 0:02:45.827 **********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 13 April 2026 00:49:09 +0000 (0:00:00.798) 0:02:46.498 **********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 13 April 2026 00:49:10 +0000 (0:00:00.603) 0:02:47.297 **********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 13 April 2026 00:49:11 +0000 (0:00:01.905) 0:02:47.900 **********
changed: [testbed-manager -> localhost]
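The "Change server address in the kubeconfig" task above rewrites the API endpoint in the fetched kubeconfig, since k3s writes the file with a node-local address rather than the cluster VIP. As a hedged sketch of that kind of rewrite (the `rewrite_server` helper and the `127.0.0.1:6443` default are assumptions for illustration, not taken from the play):

```shell
# Sketch only: point a copied k3s kubeconfig at the cluster VIP instead of
# the node-local endpoint k3s writes by default (assumed: 127.0.0.1:6443).
rewrite_server() {
  vip=$1
  # Replace the host part of the "server:" line, keeping the scheme and port.
  sed -E "s#(server: https://)[^:]+:6443#\1${vip}:6443#"
}
```

Hypothetical usage with the VIP seen earlier in this log: `rewrite_server 192.168.16.8 < k3s.yaml > ~/.kube/config`.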
TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 13 April 2026 00:49:13 +0000 (0:00:00.968) 0:02:49.806 **********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 13 April 2026 00:49:14 +0000 (0:00:00.475) 0:02:50.775 **********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 13 April 2026 00:49:14 +0000 (0:00:00.552) 0:02:51.250 **********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 13 April 2026 00:49:15 +0000 (0:00:00.199) 0:02:51.802 **********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 13 April 2026 00:49:15 +0000 (0:00:00.279) 0:02:52.002 **********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 13 April 2026 00:49:15 +0000 (0:00:01.042) 0:02:52.282 **********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 13 April 2026 00:49:16 +0000 (0:00:01.726) 0:02:53.325 **********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 13 April 2026 00:49:18 +0000 (0:00:01.157) 0:02:55.051 **********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 13 April 2026 00:49:19 +0000 (0:00:00.474) 0:02:56.208 **********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 13 April 2026 00:49:19 +0000 (0:00:08.391) 0:02:56.682 **********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 13 April 2026 00:49:28 +0000 (0:00:14.529) 0:03:05.074 **********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 13 April 2026 00:49:42 +0000 (0:00:00.670) 0:03:19.604 **********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 13 April 2026 00:49:43 +0000 (0:00:00.329) 0:03:20.275 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 13 April 2026 00:49:43 +0000 (0:00:00.542) 0:03:20.604 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 13 April 2026 00:49:44 +0000 (0:00:00.438) 0:03:21.147 **********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 13 April 2026 00:49:44 +0000 (0:00:00.693) 0:03:21.585 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 13 April 2026 00:49:45 +0000 (0:00:00.804) 0:03:22.279 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 13 April 2026 00:49:46 +0000 (0:00:00.128) 0:03:23.084 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 13 April 2026 00:49:46 +0000 (0:00:01.021) 0:03:23.212 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 13 April 2026 00:49:47 +0000 (0:00:00.120) 0:03:24.233 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 13 April 2026 00:49:47 +0000 (0:00:00.138) 0:03:24.354 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 13 April 2026 00:49:47 +0000 (0:00:00.362) 0:03:24.492 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 13 April 2026 00:49:48 +0000 (0:00:00.128) 0:03:24.855 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 13 April 2026 00:49:48 +0000 (0:00:04.662) 0:03:24.984 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 13 April 2026 00:49:52 +0000 (0:00:47.092) 0:03:29.646 **********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 13 April 2026 00:50:39 +0000 (0:00:01.604) 0:04:16.739 **********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 13 April 2026 00:50:41 +0000 (0:00:01.780) 0:04:18.343 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 13 April 2026 00:50:43 +0000 (0:00:01.223) 0:04:20.123 **********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 13 April 2026 00:50:44 +0000 (0:00:00.125) 0:04:21.346 **********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 13 April 2026 00:50:44 +0000 (0:00:02.153) 0:04:21.472 **********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 13 April 2026 00:50:46 +0000 (0:00:00.364) 0:04:23.626 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 13 April 2026 00:50:47 +0000 (0:00:01.228) 0:04:23.990 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 13 April 2026 00:50:48 +0000 (0:00:00.156) 0:04:25.219 **********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 13 April 2026 00:50:48 +0000 (0:00:00.210) 0:04:25.376 **********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 13 April 2026 00:50:48 +0000 (0:00:06.863) 0:04:25.587 **********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 13 April 2026 00:50:55 +0000 (0:00:00.954) 0:04:32.450 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Monday 13 April 2026 00:50:56 +0000 0:04:33.404 **********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] =>
(item=node-role.osism.tech/compute-plane=true) 2026-04-13 00:51:11.850945 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-13 00:51:11.850956 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-13 00:51:11.850962 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-13 00:51:11.850969 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-13 00:51:11.850975 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-13 00:51:11.850981 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-13 00:51:11.850987 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-13 00:51:11.850994 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-13 00:51:11.851000 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-13 00:51:11.851006 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-13 00:51:11.851020 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-13 00:51:11.851026 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-13 00:51:11.851032 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-13 00:51:11.851038 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-13 00:51:11.851045 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-13 00:51:11.851050 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-04-13 00:51:11.851056 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-13 00:51:11.851062 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-13 00:51:11.851068 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-13 00:51:11.851075 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-13 00:51:11.851082 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-13 00:51:11.851093 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-13 00:51:11.851099 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-13 00:51:11.851105 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-13 00:51:11.851111 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-13 00:51:11.851118 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-13 00:51:11.851124 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-13 00:51:11.851131 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-13 00:51:11.851137 | orchestrator | 2026-04-13 00:51:11.851143 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-13 00:51:11.851153 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:12.443) 0:04:45.848 ********** 2026-04-13 00:51:11.851160 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:11.851166 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:51:11.851173 | orchestrator | 
skipping: [testbed-node-5] 2026-04-13 00:51:11.851179 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:11.851186 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.851193 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.851199 | orchestrator | 2026-04-13 00:51:11.851206 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-13 00:51:11.851217 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:00.641) 0:04:46.490 ********** 2026-04-13 00:51:11.851224 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:51:11.851230 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:51:11.851236 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:51:11.851242 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:51:11.851249 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:51:11.851255 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:51:11.851261 | orchestrator | 2026-04-13 00:51:11.851267 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:51:11.851274 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:51:11.851283 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-13 00:51:11.851291 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-13 00:51:11.851297 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-13 00:51:11.851304 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-13 00:51:11.851311 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-13 00:51:11.851317 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-13 00:51:11.851324 | orchestrator | 2026-04-13 00:51:11.851331 | orchestrator | 2026-04-13 00:51:11.851337 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:51:11.851344 | orchestrator | Monday 13 April 2026 00:51:10 +0000 (0:00:00.438) 0:04:46.929 ********** 2026-04-13 00:51:11.851350 | orchestrator | =============================================================================== 2026-04-13 00:51:11.851357 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 47.09s 2026-04-13 00:51:11.851363 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.01s 2026-04-13 00:51:11.851370 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.33s 2026-04-13 00:51:11.851380 | orchestrator | kubectl : Install required packages ------------------------------------ 14.53s 2026-04-13 00:51:11.851386 | orchestrator | Manage labels ---------------------------------------------------------- 12.44s 2026-04-13 00:51:11.851393 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.83s 2026-04-13 00:51:11.851399 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.39s 2026-04-13 00:51:11.851406 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.14s 2026-04-13 00:51:11.851413 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.86s 2026-04-13 00:51:11.851419 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 6.37s 2026-04-13 00:51:11.851425 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.66s 2026-04-13 00:51:11.851432 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.51s 2026-04-13 00:51:11.851438 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.12s 2026-04-13 00:51:11.851445 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.94s 2026-04-13 00:51:11.851451 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.18s 2026-04-13 00:51:11.851466 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.15s 2026-04-13 00:51:11.851474 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.15s 2026-04-13 00:51:11.851480 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.13s 2026-04-13 00:51:11.851487 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.05s 2026-04-13 00:51:11.851494 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 2.00s 2026-04-13 00:51:11.851500 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 8dc0549e-424c-4d95-9615-f0d2c94378e7 is in state SUCCESS 2026-04-13 00:51:11.851507 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 8c3d9f39-b361-4ef8-b157-acf260933ad6 is in state STARTED 2026-04-13 00:51:11.851539 | orchestrator | 2026-04-13 00:51:11 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED 2026-04-13 00:51:11.851546 | orchestrator | 2026-04-13 00:51:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:51:14.899225 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task d9a3a43d-52be-4d6f-868c-a27a41788ce9 is in state STARTED 2026-04-13 00:51:14.902284 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:51:14.904209 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 8c3d9f39-b361-4ef8-b157-acf260933ad6 is in 
state STARTED
2026-04-13 00:51:14.905787 | orchestrator | 2026-04-13 00:51:14 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state STARTED
2026-04-13 00:51:14.906161 | orchestrator | 2026-04-13 00:51:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:51:17.950090 | orchestrator | 2026-04-13 00:51:17 | INFO  | Task 8c3d9f39-b361-4ef8-b157-acf260933ad6 is in state SUCCESS
2026-04-13 00:51:24.028589 | orchestrator | 2026-04-13 00:51:24 | INFO  | Task d9a3a43d-52be-4d6f-868c-a27a41788ce9 is in state SUCCESS
2026-04-13 00:53:59.457831 | orchestrator | 2026-04-13 00:53:59 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:53:59.463009 | orchestrator | 2026-04-13 00:53:59 | INFO  | Task 71f517f5-ef77-491e-a6cd-5235fbb4ae6b is in state SUCCESS
2026-04-13 00:53:59.465147 | orchestrator |
2026-04-13 00:53:59.465200 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-13 00:53:59.465206 | orchestrator |
2026-04-13 00:53:59.465211 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-13 00:53:59.465216 | orchestrator | Monday 13 April 2026 00:51:14 +0000 (0:00:00.327) 0:00:00.327 **********
2026-04-13 00:53:59.465221 | orchestrator |
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-13 00:53:59.465226 | orchestrator | 2026-04-13 00:53:59.465231 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-13 00:53:59.465235 | orchestrator | Monday 13 April 2026 00:51:15 +0000 (0:00:01.170) 0:00:01.498 ********** 2026-04-13 00:53:59.465239 | orchestrator | changed: [testbed-manager] 2026-04-13 00:53:59.465244 | orchestrator | 2026-04-13 00:53:59.465247 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-13 00:53:59.465251 | orchestrator | Monday 13 April 2026 00:51:17 +0000 (0:00:01.656) 0:00:03.154 ********** 2026-04-13 00:53:59.465255 | orchestrator | changed: [testbed-manager] 2026-04-13 00:53:59.465259 | orchestrator | 2026-04-13 00:53:59.465263 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:53:59.465267 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:53:59.465272 | orchestrator | 2026-04-13 00:53:59.465276 | orchestrator | 2026-04-13 00:53:59.465280 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:53:59.465284 | orchestrator | Monday 13 April 2026 00:51:17 +0000 (0:00:00.513) 0:00:03.668 ********** 2026-04-13 00:53:59.465288 | orchestrator | =============================================================================== 2026-04-13 00:53:59.465303 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.66s 2026-04-13 00:53:59.465307 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.17s 2026-04-13 00:53:59.465311 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.51s 2026-04-13 00:53:59.465315 | orchestrator | 2026-04-13 00:53:59.465319 | orchestrator | 2026-04-13 
00:53:59.465323 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-13 00:53:59.465326 | orchestrator | 2026-04-13 00:53:59.465331 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-13 00:53:59.465335 | orchestrator | Monday 13 April 2026 00:51:14 +0000 (0:00:00.280) 0:00:00.280 ********** 2026-04-13 00:53:59.465339 | orchestrator | ok: [testbed-manager] 2026-04-13 00:53:59.465344 | orchestrator | 2026-04-13 00:53:59.465348 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-13 00:53:59.465352 | orchestrator | Monday 13 April 2026 00:51:15 +0000 (0:00:00.846) 0:00:01.126 ********** 2026-04-13 00:53:59.465356 | orchestrator | ok: [testbed-manager] 2026-04-13 00:53:59.465359 | orchestrator | 2026-04-13 00:53:59.465363 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-13 00:53:59.465367 | orchestrator | Monday 13 April 2026 00:51:15 +0000 (0:00:00.685) 0:00:01.811 ********** 2026-04-13 00:53:59.465371 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-13 00:53:59.465375 | orchestrator | 2026-04-13 00:53:59.465379 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-13 00:53:59.465382 | orchestrator | Monday 13 April 2026 00:51:16 +0000 (0:00:01.067) 0:00:02.878 ********** 2026-04-13 00:53:59.465401 | orchestrator | changed: [testbed-manager] 2026-04-13 00:53:59.465405 | orchestrator | 2026-04-13 00:53:59.465408 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-13 00:53:59.465412 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:01.338) 0:00:04.217 ********** 2026-04-13 00:53:59.465416 | orchestrator | changed: [testbed-manager] 2026-04-13 00:53:59.465420 | orchestrator | 2026-04-13 00:53:59.465423 | 
orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-13 00:53:59.465427 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:00.647) 0:00:04.864 ********** 2026-04-13 00:53:59.465443 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 00:53:59.465447 | orchestrator | 2026-04-13 00:53:59.465450 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-13 00:53:59.465454 | orchestrator | Monday 13 April 2026 00:51:20 +0000 (0:00:01.927) 0:00:06.792 ********** 2026-04-13 00:53:59.465458 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 00:53:59.465462 | orchestrator | 2026-04-13 00:53:59.465465 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-13 00:53:59.465469 | orchestrator | Monday 13 April 2026 00:51:21 +0000 (0:00:00.981) 0:00:07.774 ********** 2026-04-13 00:53:59.465473 | orchestrator | ok: [testbed-manager] 2026-04-13 00:53:59.465477 | orchestrator | 2026-04-13 00:53:59.465480 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-13 00:53:59.465507 | orchestrator | Monday 13 April 2026 00:51:22 +0000 (0:00:00.443) 0:00:08.217 ********** 2026-04-13 00:53:59.465512 | orchestrator | ok: [testbed-manager] 2026-04-13 00:53:59.465515 | orchestrator | 2026-04-13 00:53:59.465519 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:53:59.465523 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:53:59.465527 | orchestrator | 2026-04-13 00:53:59.465530 | orchestrator | 2026-04-13 00:53:59.465534 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:53:59.465538 | orchestrator | Monday 13 April 2026 00:51:22 +0000 (0:00:00.321) 0:00:08.539 
********** 2026-04-13 00:53:59.465542 | orchestrator | =============================================================================== 2026-04-13 00:53:59.465545 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.93s 2026-04-13 00:53:59.465549 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s 2026-04-13 00:53:59.465553 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.07s 2026-04-13 00:53:59.465565 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.98s 2026-04-13 00:53:59.465570 | orchestrator | Get home directory of operator user ------------------------------------- 0.85s 2026-04-13 00:53:59.465584 | orchestrator | Create .kube directory -------------------------------------------------- 0.69s 2026-04-13 00:53:59.465589 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.65s 2026-04-13 00:53:59.465596 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2026-04-13 00:53:59.465609 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s 2026-04-13 00:53:59.465615 | orchestrator | 2026-04-13 00:53:59.465620 | orchestrator | 2026-04-13 00:53:59.465626 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:53:59.465633 | orchestrator | 2026-04-13 00:53:59.465638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:53:59.465643 | orchestrator | Monday 13 April 2026 00:47:54 +0000 (0:00:00.591) 0:00:00.591 ********** 2026-04-13 00:53:59.465648 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.465654 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.465660 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.465665 | orchestrator | 2026-04-13 
00:53:59.465671 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:53:59.465687 | orchestrator | Monday 13 April 2026 00:47:54 +0000 (0:00:00.424) 0:00:01.016 ********** 2026-04-13 00:53:59.465693 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-13 00:53:59.465699 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-13 00:53:59.465706 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-13 00:53:59.465712 | orchestrator | 2026-04-13 00:53:59.465718 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-13 00:53:59.465724 | orchestrator | 2026-04-13 00:53:59.465735 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-13 00:53:59.465741 | orchestrator | Monday 13 April 2026 00:47:55 +0000 (0:00:00.645) 0:00:01.662 ********** 2026-04-13 00:53:59.465747 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.465753 | orchestrator | 2026-04-13 00:53:59.465759 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-13 00:53:59.465762 | orchestrator | Monday 13 April 2026 00:47:56 +0000 (0:00:00.989) 0:00:02.651 ********** 2026-04-13 00:53:59.465766 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.465770 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.465817 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.465822 | orchestrator | 2026-04-13 00:53:59.465826 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-13 00:53:59.465830 | orchestrator | Monday 13 April 2026 00:47:57 +0000 (0:00:01.679) 0:00:04.331 ********** 2026-04-13 00:53:59.465834 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-13 00:53:59.465838 | orchestrator | 2026-04-13 00:53:59.465841 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-13 00:53:59.465845 | orchestrator | Monday 13 April 2026 00:47:58 +0000 (0:00:00.801) 0:00:05.133 ********** 2026-04-13 00:53:59.465849 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.465853 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.465856 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.465860 | orchestrator | 2026-04-13 00:53:59.465864 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-13 00:53:59.465867 | orchestrator | Monday 13 April 2026 00:47:59 +0000 (0:00:01.326) 0:00:06.459 ********** 2026-04-13 00:53:59.465871 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-13 00:53:59.465875 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-13 00:53:59.465879 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-13 00:53:59.465883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-13 00:53:59.465886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-13 00:53:59.465890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-13 00:53:59.465894 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-13 00:53:59.465899 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-13 00:53:59.465902 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-13 00:53:59.465906 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-13 00:53:59.465910 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-13 00:53:59.465914 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-13 00:53:59.465917 | orchestrator | 2026-04-13 00:53:59.465921 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-13 00:53:59.465930 | orchestrator | Monday 13 April 2026 00:48:03 +0000 (0:00:03.123) 0:00:09.583 ********** 2026-04-13 00:53:59.465933 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-13 00:53:59.465937 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-13 00:53:59.465941 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-13 00:53:59.465945 | orchestrator | 2026-04-13 00:53:59.465949 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-13 00:53:59.465958 | orchestrator | Monday 13 April 2026 00:48:04 +0000 (0:00:00.983) 0:00:10.567 ********** 2026-04-13 00:53:59.465962 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-13 00:53:59.465966 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-13 00:53:59.465970 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-13 00:53:59.465973 | orchestrator | 2026-04-13 00:53:59.465977 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-13 00:53:59.465981 | orchestrator | Monday 13 April 2026 00:48:05 +0000 (0:00:01.703) 0:00:12.270 ********** 2026-04-13 00:53:59.465985 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-13 00:53:59.465988 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.465992 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-13 
00:53:59.465996 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.465999 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-13 00:53:59.466003 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.466007 | orchestrator | 2026-04-13 00:53:59.466011 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-13 00:53:59.466103 | orchestrator | Monday 13 April 2026 00:48:06 +0000 (0:00:01.146) 0:00:13.416 ********** 2026-04-13 00:53:59.466115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.466173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.466182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.466190 | orchestrator | 2026-04-13 00:53:59.466194 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-13 00:53:59.466198 | orchestrator | Monday 13 April 2026 00:48:10 +0000 (0:00:03.528) 0:00:16.944 ********** 2026-04-13 00:53:59.466202 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.466206 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.466210 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.466213 | orchestrator | 2026-04-13 00:53:59.466217 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-13 00:53:59.466224 | orchestrator | Monday 13 April 2026 00:48:12 +0000 (0:00:02.064) 0:00:19.009 ********** 2026-04-13 00:53:59.466228 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-13 00:53:59.466232 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-13 00:53:59.466235 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-13 00:53:59.466239 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-13 00:53:59.466243 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-13 00:53:59.466246 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-13 00:53:59.466250 | orchestrator | 2026-04-13 00:53:59.466254 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-13 00:53:59.466258 | orchestrator | Monday 13 April 2026 00:48:15 +0000 (0:00:03.262) 0:00:22.272 ********** 2026-04-13 00:53:59.466261 | orchestrator | 
changed: [testbed-node-0] 2026-04-13 00:53:59.466265 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.466269 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.466273 | orchestrator | 2026-04-13 00:53:59.466301 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-13 00:53:59.466305 | orchestrator | Monday 13 April 2026 00:48:17 +0000 (0:00:01.417) 0:00:23.690 ********** 2026-04-13 00:53:59.466309 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.466313 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.466316 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.466320 | orchestrator | 2026-04-13 00:53:59.466324 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-13 00:53:59.466327 | orchestrator | Monday 13 April 2026 00:48:18 +0000 (0:00:01.509) 0:00:25.199 ********** 2026-04-13 00:53:59.466336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.466341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.466348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.466353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:53:59.466361 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.466365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.466369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.466373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.466381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:53:59.466385 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.466392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.466396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.466403 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.466407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:53:59.466411 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.466415 | orchestrator | 2026-04-13 00:53:59.466419 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-13 00:53:59.466423 | orchestrator | Monday 13 April 2026 00:48:19 +0000 (0:00:01.236) 0:00:26.436 ********** 2026-04-13 00:53:59.466427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.466456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:53:59.466467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.466472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:53:59.466478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.466512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935', '__omit_place_holder__9ae5ce329d0998bdbafa663987b8d7628a96e935'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-13 00:53:59.466516 | orchestrator | 2026-04-13 00:53:59.466519 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-13 00:53:59.466523 | orchestrator | Monday 13 April 2026 00:48:25 +0000 (0:00:05.834) 0:00:32.271 
********** 2026-04-13 00:53:59.466527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.466975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.466997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.467038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.467059 | orchestrator | 2026-04-13 00:53:59.467081 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] 
********************************* 2026-04-13 00:53:59.467101 | orchestrator | Monday 13 April 2026 00:48:30 +0000 (0:00:04.226) 0:00:36.497 ********** 2026-04-13 00:53:59.467135 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-13 00:53:59.467153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-13 00:53:59.467165 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-13 00:53:59.467176 | orchestrator | 2026-04-13 00:53:59.467187 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-13 00:53:59.467198 | orchestrator | Monday 13 April 2026 00:48:32 +0000 (0:00:02.344) 0:00:38.841 ********** 2026-04-13 00:53:59.467208 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-13 00:53:59.467227 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-13 00:53:59.467238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-13 00:53:59.467249 | orchestrator | 2026-04-13 00:53:59.467260 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-13 00:53:59.467270 | orchestrator | Monday 13 April 2026 00:48:37 +0000 (0:00:05.289) 0:00:44.131 ********** 2026-04-13 00:53:59.467282 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.467293 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.467304 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.467314 | orchestrator | 2026-04-13 00:53:59.467329 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-13 
00:53:59.467352 | orchestrator | Monday 13 April 2026 00:48:39 +0000 (0:00:01.575) 0:00:45.707 ********** 2026-04-13 00:53:59.467380 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-13 00:53:59.467400 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-13 00:53:59.467417 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-13 00:53:59.467434 | orchestrator | 2026-04-13 00:53:59.467449 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-13 00:53:59.467466 | orchestrator | Monday 13 April 2026 00:48:43 +0000 (0:00:04.028) 0:00:49.735 ********** 2026-04-13 00:53:59.467510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-13 00:53:59.467529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-13 00:53:59.467546 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-13 00:53:59.467564 | orchestrator | 2026-04-13 00:53:59.467582 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-13 00:53:59.467600 | orchestrator | Monday 13 April 2026 00:48:45 +0000 (0:00:02.558) 0:00:52.293 ********** 2026-04-13 00:53:59.467618 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.467635 | orchestrator | 2026-04-13 00:53:59.467655 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-13 00:53:59.467674 | orchestrator | Monday 13 April 
2026 00:48:46 +0000 (0:00:00.627) 0:00:52.921 ********** 2026-04-13 00:53:59.467695 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-13 00:53:59.467707 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-13 00:53:59.467718 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-13 00:53:59.467730 | orchestrator | 2026-04-13 00:53:59.467741 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-13 00:53:59.467752 | orchestrator | Monday 13 April 2026 00:48:48 +0000 (0:00:02.316) 0:00:55.238 ********** 2026-04-13 00:53:59.467777 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-13 00:53:59.467789 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-13 00:53:59.467800 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-13 00:53:59.467811 | orchestrator | 2026-04-13 00:53:59.467822 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-13 00:53:59.467833 | orchestrator | Monday 13 April 2026 00:48:50 +0000 (0:00:01.930) 0:00:57.168 ********** 2026-04-13 00:53:59.467844 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.467856 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.467867 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.467878 | orchestrator | 2026-04-13 00:53:59.467902 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-13 00:53:59.467917 | orchestrator | Monday 13 April 2026 00:48:50 +0000 (0:00:00.313) 0:00:57.481 ********** 2026-04-13 00:53:59.467935 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.467953 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.467969 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.467987 | orchestrator | 2026-04-13 00:53:59.468005 | 
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-13 00:53:59.468022 | orchestrator | Monday 13 April 2026 00:48:51 +0000 (0:00:00.351) 0:00:57.833 ********** 2026-04-13 00:53:59.468041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.468077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.468101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.468118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.468150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.468186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.468209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.468239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.468253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.468264 | orchestrator | 2026-04-13 00:53:59.468275 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-13 00:53:59.468290 | orchestrator | Monday 13 April 2026 00:48:54 +0000 (0:00:03.343) 0:01:01.176 ********** 2026-04-13 00:53:59.468311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.468346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.468368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.468389 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.468421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.468436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.468454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.468468 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.468535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.468559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.468600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.468620 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.468637 | orchestrator | 2026-04-13 00:53:59.468649 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-13 00:53:59.468661 | orchestrator | Monday 13 April 2026 00:48:55 +0000 (0:00:00.643) 0:01:01.820 ********** 2026-04-13 00:53:59.468689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.468717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.468750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.468769 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.468788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.468820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.468838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.468855 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.468873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.468902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.468919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.468936 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.468954 | orchestrator | 2026-04-13 00:53:59.468979 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-13 00:53:59.468999 | orchestrator | Monday 13 April 2026 00:48:56 +0000 (0:00:00.796) 0:01:02.617 ********** 2026-04-13 00:53:59.469017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-13 00:53:59.469036 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-13 00:53:59.469053 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-13 00:53:59.469092 | orchestrator | 2026-04-13 00:53:59.469110 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-13 00:53:59.469127 | orchestrator | Monday 13 April 2026 00:48:58 +0000 (0:00:01.880) 0:01:04.497 ********** 2026-04-13 00:53:59.469146 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-13 00:53:59.469166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-13 00:53:59.469184 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-13 00:53:59.469204 | orchestrator | 2026-04-13 00:53:59.469223 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-13 00:53:59.469242 | orchestrator | Monday 13 April 2026 00:48:59 +0000 (0:00:01.696) 0:01:06.194 ********** 2026-04-13 00:53:59.469260 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-13 00:53:59.469277 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-13 00:53:59.469297 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-13 00:53:59.469316 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.469335 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-13 00:53:59.469353 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-13 00:53:59.469365 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.469375 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-13 00:53:59.469386 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.469398 | orchestrator | 2026-04-13 00:53:59.469417 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-13 00:53:59.469437 | orchestrator | Monday 13 April 2026 00:49:00 +0000 (0:00:00.882) 0:01:07.077 ********** 
2026-04-13 00:53:59.469457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.469565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.469592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.469637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.469658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.469676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.469697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.472059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.472117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.472128 | orchestrator | 2026-04-13 00:53:59.472139 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to 
restart containers] *** 2026-04-13 00:53:59.472149 | orchestrator | Monday 13 April 2026 00:49:02 +0000 (0:00:02.094) 0:01:09.172 ********** 2026-04-13 00:53:59.472174 | orchestrator | changed: [testbed-node-0] => { 2026-04-13 00:53:59.472184 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:53:59.472194 | orchestrator | } 2026-04-13 00:53:59.472204 | orchestrator | changed: [testbed-node-1] => { 2026-04-13 00:53:59.472214 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:53:59.472223 | orchestrator | } 2026-04-13 00:53:59.472233 | orchestrator | changed: [testbed-node-2] => { 2026-04-13 00:53:59.472242 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:53:59.472251 | orchestrator | } 2026-04-13 00:53:59.472261 | orchestrator | 2026-04-13 00:53:59.472271 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:53:59.472280 | orchestrator | Monday 13 April 2026 00:49:03 +0000 (0:00:00.375) 0:01:09.547 ********** 2026-04-13 00:53:59.472297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.472309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.472319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.472329 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.472339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.472364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.472385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.472401 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.472423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.472439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.472455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.472471 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.472571 | orchestrator | 2026-04-13 00:53:59.472591 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-13 00:53:59.472607 | orchestrator | Monday 13 April 2026 00:49:04 +0000 (0:00:01.281) 0:01:10.828 ********** 2026-04-13 00:53:59.472623 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.472639 | orchestrator | 2026-04-13 00:53:59.472656 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-13 00:53:59.472665 | orchestrator | Monday 13 April 2026 00:49:05 +0000 (0:00:00.772) 0:01:11.601 ********** 2026-04-13 00:53:59.472685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.472704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.472729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.472776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.472797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.472836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.472845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472870 | orchestrator | 2026-04-13 00:53:59.472883 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-13 00:53:59.472903 | orchestrator | Monday 13 April 2026 00:49:08 +0000 (0:00:03.310) 0:01:14.912 ********** 2026-04-13 00:53:59.472926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-13 00:53:59.472942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.472960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.472994 | 
orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.473011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.473035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.473044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.473056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.473065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 
5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473095 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.473103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473112 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.473119 | orchestrator | 2026-04-13 00:53:59.473131 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-13 00:53:59.473140 | orchestrator | Monday 13 April 2026 00:49:09 +0000 (0:00:00.633) 0:01:15.545 ********** 2026-04-13 00:53:59.473149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473169 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.473177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473192 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.473201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473217 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.473225 | orchestrator | 2026-04-13 00:53:59.473233 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-13 00:53:59.473241 | orchestrator | Monday 13 April 2026 00:49:09 +0000 (0:00:00.913) 0:01:16.458 ********** 2026-04-13 00:53:59.473249 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.473256 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.473264 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.473272 | orchestrator | 2026-04-13 00:53:59.473279 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-13 00:53:59.473287 | orchestrator | Monday 13 April 2026 00:49:11 +0000 (0:00:01.177) 0:01:17.636 ********** 2026-04-13 00:53:59.473295 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.473302 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.473310 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.473318 | orchestrator | 2026-04-13 00:53:59.473325 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-13 00:53:59.473339 | orchestrator | Monday 13 April 2026 00:49:13 +0000 (0:00:01.950) 0:01:19.587 ********** 2026-04-13 00:53:59.473347 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.473355 | orchestrator | 2026-04-13 00:53:59.473362 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-13 00:53:59.473370 | orchestrator | Monday 13 April 2026 00:49:13 +0000 (0:00:00.625) 0:01:20.212 ********** 2026-04-13 00:53:59.473401 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.473418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.473449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-13 00:53:59.473507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473529 | orchestrator | 2026-04-13 00:53:59.473537 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-13 00:53:59.473545 | orchestrator | Monday 13 April 2026 00:49:18 +0000 (0:00:04.859) 0:01:25.072 ********** 2026-04-13 00:53:59.473554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.473571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473592 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.473601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.473613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473638 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.473646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.473660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.473676 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.473684 | orchestrator | 2026-04-13 00:53:59.473692 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-13 00:53:59.473701 | orchestrator | Monday 13 April 2026 00:49:19 +0000 (0:00:01.104) 0:01:26.177 ********** 2026-04-13 00:53:59.473709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473721 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473735 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.473743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473759 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.473767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.473783 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.473791 | orchestrator | 2026-04-13 00:53:59.473799 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-13 00:53:59.473807 | orchestrator | Monday 13 April 2026 00:49:20 +0000 (0:00:00.887) 0:01:27.064 ********** 2026-04-13 00:53:59.473815 | orchestrator | changed: 
[testbed-node-0] 2026-04-13 00:53:59.473822 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.473830 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.473838 | orchestrator | 2026-04-13 00:53:59.473845 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-13 00:53:59.473853 | orchestrator | Monday 13 April 2026 00:49:21 +0000 (0:00:01.282) 0:01:28.347 ********** 2026-04-13 00:53:59.473861 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.473868 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.473876 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.473884 | orchestrator | 2026-04-13 00:53:59.473891 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-13 00:53:59.473899 | orchestrator | Monday 13 April 2026 00:49:24 +0000 (0:00:02.285) 0:01:30.632 ********** 2026-04-13 00:53:59.473907 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.473915 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.473923 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.473930 | orchestrator | 2026-04-13 00:53:59.473938 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-13 00:53:59.473945 | orchestrator | Monday 13 April 2026 00:49:24 +0000 (0:00:00.547) 0:01:31.179 ********** 2026-04-13 00:53:59.473953 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.473961 | orchestrator | 2026-04-13 00:53:59.473972 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-13 00:53:59.473985 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:00.685) 0:01:31.865 ********** 2026-04-13 00:53:59.474005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-13 00:53:59.474081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-13 00:53:59.474098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-13 00:53:59.474112 | orchestrator | 2026-04-13 00:53:59.474126 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-13 00:53:59.474140 | orchestrator | Monday 13 April 2026 00:49:29 +0000 (0:00:03.777) 0:01:35.643 ********** 2026-04-13 00:53:59.474153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-13 00:53:59.474167 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.474181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-13 00:53:59.474190 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.474211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-13 00:53:59.474225 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.474239 | orchestrator | 2026-04-13 00:53:59.474253 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-13 00:53:59.474265 | orchestrator | Monday 13 April 2026 00:49:31 +0000 (0:00:02.048) 0:01:37.691 ********** 2026-04-13 00:53:59.474287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:53:59.474303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:53:59.474318 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.474332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:53:59.474347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:53:59.474360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:53:59.474373 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.474386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-13 00:53:59.474399 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.474421 | orchestrator | 2026-04-13 00:53:59.474434 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-13 00:53:59.474448 | orchestrator | Monday 13 April 2026 00:49:34 +0000 (0:00:02.801) 0:01:40.493 ********** 2026-04-13 00:53:59.474463 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.474472 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.474480 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.474508 | orchestrator | 2026-04-13 00:53:59.474524 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-13 00:53:59.474532 | orchestrator | Monday 13 April 2026 00:49:34 +0000 (0:00:00.489) 0:01:40.982 ********** 2026-04-13 00:53:59.474540 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.474548 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.474555 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.474563 | orchestrator | 2026-04-13 00:53:59.474571 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-13 
00:53:59.474579 | orchestrator | Monday 13 April 2026 00:49:35 +0000 (0:00:01.329) 0:01:42.311 ********** 2026-04-13 00:53:59.474587 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.474595 | orchestrator | 2026-04-13 00:53:59.474602 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-13 00:53:59.474610 | orchestrator | Monday 13 April 2026 00:49:36 +0000 (0:00:01.088) 0:01:43.400 ********** 2026-04-13 00:53:59.474625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.474635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.474676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.474734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.474764 | orchestrator | 2026-04-13 00:53:59.474772 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-13 00:53:59.474780 | orchestrator | Monday 13 April 2026 00:49:41 +0000 (0:00:04.357) 0:01:47.758 ********** 2026-04-13 00:53:59.474789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.474802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.478156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 2026-04-13 00:53:59 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:53:59.479255 | orchestrator | '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479309 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.479323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.479331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479349 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479371 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.479380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.479388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479410 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.479417 | orchestrator | 2026-04-13 00:53:59.479423 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-13 00:53:59.479431 | orchestrator | Monday 13 April 2026 00:49:42 +0000 (0:00:01.022) 0:01:48.780 ********** 2026-04-13 00:53:59.479438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.479447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.479458 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.479464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.479471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.479477 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.479503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.479510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.479518 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.479525 | orchestrator | 2026-04-13 00:53:59.479531 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-13 00:53:59.479538 | orchestrator | Monday 13 April 2026 00:49:43 +0000 (0:00:01.255) 0:01:50.036 ********** 2026-04-13 00:53:59.479544 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.479550 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.479556 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.479562 | orchestrator | 2026-04-13 00:53:59.479568 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-13 00:53:59.479574 | orchestrator | Monday 13 April 2026 00:49:44 +0000 (0:00:01.259) 0:01:51.296 ********** 2026-04-13 00:53:59.479580 | orchestrator | changed: 
[testbed-node-0] 2026-04-13 00:53:59.479590 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.479596 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.479602 | orchestrator | 2026-04-13 00:53:59.479608 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-13 00:53:59.479614 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:01.872) 0:01:53.168 ********** 2026-04-13 00:53:59.479621 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.479627 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.479633 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.479639 | orchestrator | 2026-04-13 00:53:59.479645 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-13 00:53:59.479651 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:00.283) 0:01:53.451 ********** 2026-04-13 00:53:59.479657 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.479663 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.479669 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.479675 | orchestrator | 2026-04-13 00:53:59.479682 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-13 00:53:59.479688 | orchestrator | Monday 13 April 2026 00:49:47 +0000 (0:00:00.286) 0:01:53.738 ********** 2026-04-13 00:53:59.479694 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.479700 | orchestrator | 2026-04-13 00:53:59.479706 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-13 00:53:59.479712 | orchestrator | Monday 13 April 2026 00:49:48 +0000 (0:00:01.059) 0:01:54.797 ********** 2026-04-13 00:53:59.479719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.479733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:53:59.479740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.479792 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:53:59.479805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-04-13 00:53:59.479819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.479846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:53:59.479867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479912 | orchestrator | 2026-04-13 00:53:59.479919 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-13 00:53:59.479927 | orchestrator | Monday 13 April 2026 00:49:52 +0000 (0:00:04.568) 0:01:59.365 ********** 2026-04-13 00:53:59.479937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.479944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:53:59.479952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.479994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480000 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.480007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.480013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:53:59.480020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480064 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.480071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.480081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-13 00:53:59.480092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.480127 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.480134 | orchestrator | 2026-04-13 00:53:59.480140 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-13 00:53:59.480146 | orchestrator | Monday 13 April 2026 00:49:53 +0000 (0:00:01.028) 0:02:00.394 ********** 2026-04-13 00:53:59.480154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.480164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.480171 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.480181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.480187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.480194 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.480200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.480207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.480213 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.480219 | orchestrator | 2026-04-13 00:53:59.480226 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-13 00:53:59.480232 | orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:01.270) 0:02:01.664 ********** 2026-04-13 00:53:59.480241 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.480247 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.480253 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.480259 | orchestrator | 2026-04-13 00:53:59.480266 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-13 00:53:59.480272 | orchestrator | Monday 13 April 2026 00:49:56 +0000 (0:00:01.494) 0:02:03.158 ********** 2026-04-13 00:53:59.480278 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.480285 | 
orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.480291 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.480297 | orchestrator |
2026-04-13 00:53:59.480303 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-13 00:53:59.480310 | orchestrator | Monday 13 April 2026 00:49:58 +0000 (0:00:02.240) 0:02:05.399 **********
2026-04-13 00:53:59.480316 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.480322 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.480328 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.480334 | orchestrator |
2026-04-13 00:53:59.480341 | orchestrator | TASK [include_role : glance] ***************************************************
2026-04-13 00:53:59.480347 | orchestrator | Monday 13 April 2026 00:49:59 +0000 (0:00:00.358) 0:02:05.757 **********
2026-04-13 00:53:59.480353 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:53:59.480360 | orchestrator |
2026-04-13 00:53:59.480366 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-13 00:53:59.480372 | orchestrator | Monday 13 April 2026 00:50:00 +0000 (0:00:01.054) 0:02:06.812 **********
2026-04-13 00:53:59.480379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 00:53:59.480400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-13 00:53:59.480409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 00:53:59.480440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-13 00:53:59.480453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 00:53:59.480478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-13 00:53:59.480508 | orchestrator |
2026-04-13 00:53:59.480522 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-04-13 00:53:59.480532 | orchestrator | Monday 13 April 2026 00:50:05 +0000 (0:00:04.922) 0:02:11.735 **********
2026-04-13 00:53:59.480543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 00:53:59.480570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-13 00:53:59.480582 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.480599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 00:53:59.480623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-13 00:53:59.480631 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.480644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-13 00:53:59.480789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-13 00:53:59.480807 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.480813 | orchestrator |
2026-04-13 00:53:59.480820 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-04-13 00:53:59.480827 | orchestrator | Monday 13 April 2026 00:50:08 +0000 (0:00:03.515) 0:02:15.250 **********
2026-04-13 00:53:59.480834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-13 00:53:59.480845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-13 00:53:59.480851 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.480858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-13 00:53:59.480872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-13 00:53:59.480879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-13 00:53:59.480885 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.480892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-13 00:53:59.480898 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.480905 | orchestrator |
2026-04-13 00:53:59.480911 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-13 00:53:59.480917 | orchestrator | Monday 13 April 2026 00:50:13 +0000 (0:00:04.244) 0:02:19.495 **********
2026-04-13 00:53:59.480923 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:59.480930 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.480936 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.480942 | orchestrator |
2026-04-13 00:53:59.480949 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-13 00:53:59.480960 | orchestrator | Monday 13 April 2026 00:50:14 +0000 (0:00:01.558) 0:02:21.053 **********
2026-04-13 00:53:59.480967 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:59.480974 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.480980 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.480986 | orchestrator |
2026-04-13 00:53:59.480992 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-13 00:53:59.480999 | orchestrator | Monday 13 April 2026 00:50:16 +0000 (0:00:02.272) 0:02:23.325 **********
2026-04-13 00:53:59.481005 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.481011 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.481017 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.481023 | orchestrator |
2026-04-13 00:53:59.481029 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-13 00:53:59.481036 | orchestrator | Monday 13 April 2026 00:50:17 +0000 (0:00:00.382) 0:02:23.708 **********
2026-04-13 00:53:59.481042 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:53:59.481048 | orchestrator |
2026-04-13 00:53:59.481054 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-13 00:53:59.481060 | orchestrator | Monday 13 April 2026 00:50:18 +0000 (0:00:00.875) 0:02:24.583 **********
2026-04-13 00:53:59.481070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.481081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.481088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.481095 | orchestrator |
2026-04-13 00:53:59.481101 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-04-13 00:53:59.481107 | orchestrator | Monday 13 April 2026 00:50:21 +0000 (0:00:03.521) 0:02:28.105 **********
2026-04-13 00:53:59.481118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.481124 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.481131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.481142 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.481151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.481158 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.481164 | orchestrator |
2026-04-13 00:53:59.481170 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-04-13 00:53:59.481177 | orchestrator | Monday 13 April 2026 00:50:22 +0000 (0:00:00.452) 0:02:28.558 **********
2026-04-13 00:53:59.481183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.481190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.481197 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.481203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.481210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.481216 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.481222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.481229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.481235 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.481241 | orchestrator |
2026-04-13 00:53:59.481247 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-13 00:53:59.481253 | orchestrator | Monday 13 April 2026 00:50:22 +0000 (0:00:00.709) 0:02:29.267 **********
2026-04-13 00:53:59.481259 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:59.481266 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.481272 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.481278 | orchestrator |
2026-04-13 00:53:59.481284 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-13 00:53:59.481290 | orchestrator | Monday 13 April 2026 00:50:24 +0000 (0:00:01.481) 0:02:30.748 **********
2026-04-13 00:53:59.481296 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:59.481302 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.481309 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.481315 | orchestrator |
2026-04-13 00:53:59.481321 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-13 00:53:59.481337 | orchestrator | Monday 13 April 2026 00:50:26 +0000 (0:00:01.954) 0:02:32.702 **********
2026-04-13 00:53:59.481344 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.481350 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.481356 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.481362 | orchestrator |
2026-04-13 00:53:59.481369 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-13 00:53:59.481375 | orchestrator | Monday 13 April 2026 00:50:26 +0000
(0:00:00.491) 0:02:33.194 ********** 2026-04-13 00:53:59.481381 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.481387 | orchestrator | 2026-04-13 00:53:59.481395 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-13 00:53:59.481402 | orchestrator | Monday 13 April 2026 00:50:27 +0000 (0:00:00.867) 0:02:34.061 ********** 2026-04-13 00:53:59.481414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:53:59.481429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:53:59.481447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:53:59.481456 | orchestrator | 2026-04-13 00:53:59.481464 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-13 00:53:59.481471 | orchestrator | Monday 13 April 2026 00:50:30 +0000 (0:00:03.282) 0:02:37.344 ********** 2026-04-13 00:53:59.481508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:53:59.481525 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.481533 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:53:59.481545 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.481562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:53:59.481571 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.481579 | orchestrator | 2026-04-13 00:53:59.481585 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-13 00:53:59.481592 | orchestrator | Monday 13 April 2026 00:50:31 +0000 (0:00:00.794) 0:02:38.139 ********** 2026-04-13 00:53:59.481601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-13 00:53:59.481611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-13 00:53:59.481619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-13 00:53:59.481632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-13 00:53:59.481640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-13 00:53:59.481648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-13 00:53:59.481659 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.481666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-13 00:53:59.481673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-13 00:53:59.481679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-13 00:53:59.481689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-13 00:53:59.481696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-13 00:53:59.481703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-13 00:53:59.481709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-13 00:53:59.481716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-13 00:53:59.481722 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 00:53:59.481728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-13 00:53:59.481739 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.481745 | orchestrator | 2026-04-13 00:53:59.481752 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-13 00:53:59.481758 | orchestrator | Monday 13 April 2026 00:50:32 +0000 (0:00:01.071) 0:02:39.210 ********** 2026-04-13 00:53:59.481764 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.481771 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.481777 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.481788 | orchestrator | 2026-04-13 00:53:59.481798 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-13 00:53:59.481809 | orchestrator | Monday 13 April 2026 00:50:34 +0000 (0:00:01.384) 0:02:40.595 ********** 2026-04-13 00:53:59.481819 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.481830 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.481841 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.481848 | orchestrator | 2026-04-13 00:53:59.481854 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-13 00:53:59.481860 | orchestrator | Monday 13 April 2026 00:50:36 +0000 (0:00:02.408) 0:02:43.004 ********** 2026-04-13 00:53:59.481867 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.481874 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.481886 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.481895 | orchestrator | 2026-04-13 00:53:59.481907 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-13 00:53:59.481916 | orchestrator | Monday 13 April 2026 
00:50:37 +0000 (0:00:00.557) 0:02:43.561 ********** 2026-04-13 00:53:59.481927 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.481938 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.481949 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.481960 | orchestrator | 2026-04-13 00:53:59.481971 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-13 00:53:59.481980 | orchestrator | Monday 13 April 2026 00:50:37 +0000 (0:00:00.331) 0:02:43.893 ********** 2026-04-13 00:53:59.481992 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.481999 | orchestrator | 2026-04-13 00:53:59.482005 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-13 00:53:59.482012 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:00.994) 0:02:44.887 ********** 2026-04-13 00:53:59.482048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:53:59.482057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:53:59.482076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:53:59.482089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:53:59.482117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:53:59.482126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:53:59.482137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:53:59.482153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:53:59.482164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:53:59.482175 | orchestrator | 2026-04-13 00:53:59.482186 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-13 00:53:59.482199 | orchestrator | Monday 13 April 2026 00:50:43 +0000 (0:00:05.196) 0:02:50.083 ********** 2026-04-13 00:53:59.482216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:53:59.482224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:53:59.482234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:53:59.482247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:53:59.482254 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.482261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:53:59.482272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:53:59.482280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:53:59.482286 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.482295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:53:59.482310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:53:59.482317 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.482323 | orchestrator | 2026-04-13 00:53:59.482330 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-13 00:53:59.482336 | orchestrator | Monday 13 April 2026 00:50:44 +0000 (0:00:00.814) 0:02:50.898 ********** 2026-04-13 00:53:59.482346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-13 00:53:59.482357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-13 00:53:59.482368 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.482379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-13 00:53:59.482390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-13 00:53:59.482400 | orchestrator | 
skipping: [testbed-node-1] 2026-04-13 00:53:59.482410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-13 00:53:59.482427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-13 00:53:59.482438 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.482448 | orchestrator | 2026-04-13 00:53:59.482458 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-13 00:53:59.482464 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:01.144) 0:02:52.043 ********** 2026-04-13 00:53:59.482470 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.482476 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.482530 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.482538 | orchestrator | 2026-04-13 00:53:59.482551 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-13 00:53:59.482557 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:01.264) 0:02:53.307 ********** 2026-04-13 00:53:59.482563 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.482570 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.482576 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.482582 | orchestrator | 2026-04-13 00:53:59.482588 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-13 00:53:59.482594 | orchestrator | Monday 13 April 2026 00:50:49 +0000 (0:00:02.197) 
0:02:55.504 ********** 2026-04-13 00:53:59.482600 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.482606 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.482612 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.482618 | orchestrator | 2026-04-13 00:53:59.482631 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-13 00:53:59.482637 | orchestrator | Monday 13 April 2026 00:50:49 +0000 (0:00:00.453) 0:02:55.958 ********** 2026-04-13 00:53:59.482643 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.482649 | orchestrator | 2026-04-13 00:53:59.482655 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-13 00:53:59.482662 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:00.925) 0:02:56.883 ********** 2026-04-13 00:53:59.482669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 
00:53:59.482676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.482688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.482701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.482709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.482716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.482723 | orchestrator | 2026-04-13 00:53:59.482729 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-13 00:53:59.482735 | orchestrator | Monday 13 April 2026 00:50:54 +0000 (0:00:04.158) 0:03:01.042 ********** 2026-04-13 00:53:59.482742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.482757 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.482763 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.482773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.482780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.482786 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.482793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.482804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.482815 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.482821 | orchestrator | 2026-04-13 00:53:59.482828 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-13 00:53:59.482836 | orchestrator | Monday 13 April 2026 00:50:55 +0000 (0:00:01.223) 0:03:02.265 ********** 2026-04-13 00:53:59.482847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.482859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.482869 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.482883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.482894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.482905 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.482915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.482925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.482936 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.482945 | orchestrator | 2026-04-13 00:53:59.482951 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-13 00:53:59.482957 | orchestrator | Monday 13 April 2026 00:50:56 +0000 (0:00:01.098) 0:03:03.363 ********** 2026-04-13 00:53:59.482964 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.482970 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.482976 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.482982 | orchestrator | 2026-04-13 00:53:59.482988 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-13 00:53:59.482994 | orchestrator | Monday 13 April 2026 00:50:58 +0000 (0:00:01.298) 0:03:04.661 ********** 2026-04-13 00:53:59.483000 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.483007 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.483013 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.483019 | orchestrator | 2026-04-13 00:53:59.483025 | orchestrator | TASK [include_role 
: manila] *************************************************** 2026-04-13 00:53:59.483032 | orchestrator | Monday 13 April 2026 00:51:00 +0000 (0:00:02.690) 0:03:07.352 ********** 2026-04-13 00:53:59.483038 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.483049 | orchestrator | 2026-04-13 00:53:59.483056 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-13 00:53:59.483062 | orchestrator | Monday 13 April 2026 00:51:02 +0000 (0:00:01.520) 0:03:08.872 ********** 2026-04-13 00:53:59.483069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.483080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.483110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.483140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483161 | orchestrator | 2026-04-13 00:53:59.483166 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-13 00:53:59.483172 | orchestrator 
| Monday 13 April 2026 00:51:06 +0000 (0:00:04.195) 0:03:13.068 ********** 2026-04-13 00:53:59.483183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.483191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483212 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.483218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.483227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483258 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.483268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.483284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}})  2026-04-13 00:53:59.483294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.483320 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.483330 | orchestrator | 2026-04-13 00:53:59.483336 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-13 00:53:59.483342 | orchestrator | Monday 13 April 2026 00:51:07 +0000 (0:00:01.119) 0:03:14.187 ********** 2026-04-13 00:53:59.483347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-04-13 00:53:59.483353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.483359 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.483364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.483370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.483375 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.483381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.483409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.483419 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.483429 | orchestrator | 2026-04-13 00:53:59.483438 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-13 00:53:59.483447 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:01.433) 0:03:15.621 ********** 2026-04-13 00:53:59.483456 | orchestrator | changed: [testbed-node-0] 2026-04-13 
00:53:59.483463 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.483469 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.483475 | orchestrator | 2026-04-13 00:53:59.483504 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-13 00:53:59.483513 | orchestrator | Monday 13 April 2026 00:51:10 +0000 (0:00:01.286) 0:03:16.907 ********** 2026-04-13 00:53:59.483522 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.483531 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.483540 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.483550 | orchestrator | 2026-04-13 00:53:59.483558 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-13 00:53:59.483567 | orchestrator | Monday 13 April 2026 00:51:12 +0000 (0:00:02.224) 0:03:19.131 ********** 2026-04-13 00:53:59.483574 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.483580 | orchestrator | 2026-04-13 00:53:59.483585 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-13 00:53:59.483591 | orchestrator | Monday 13 April 2026 00:51:13 +0000 (0:00:01.120) 0:03:20.252 ********** 2026-04-13 00:53:59.483596 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:53:59.483602 | orchestrator | 2026-04-13 00:53:59.483607 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-13 00:53:59.483613 | orchestrator | Monday 13 April 2026 00:51:15 +0000 (0:00:02.069) 0:03:22.321 ********** 2026-04-13 00:53:59.483625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:53:59.483640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:53:59.483646 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.483652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:53:59.483662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:53:59.483667 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.483676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:53:59.483686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:53:59.483691 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.483697 | orchestrator | 2026-04-13 00:53:59.483702 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-13 00:53:59.483708 | orchestrator | Monday 13 April 2026 00:51:18 +0000 (0:00:02.887) 0:03:25.209 ********** 2026-04-13 00:53:59.483717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:53:59.483728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:53:59.483736 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.483742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:53:59.483748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:53:59.483754 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.483775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:53:59.483796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-13 00:53:59.483805 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.483814 | orchestrator | 2026-04-13 00:53:59.483822 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-13 00:53:59.483830 | orchestrator | Monday 13 April 2026 00:51:21 +0000 
(0:00:02.415) 0:03:27.625 ********** 2026-04-13 00:53:59.483839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:53:59.483849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:53:59.483858 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.483867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 
00:53:59.483879 | orchestrator | 2026-04-13 00:53:59 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:53:59.483890 | orchestrator | 2026-04-13 00:53:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:53:59.484011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:53:59.484021 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:53:59.484036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-13 00:53:59.484042 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484047 | orchestrator | 2026-04-13 00:53:59.484053 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-13 00:53:59.484058 | orchestrator | Monday 13 April 2026 00:51:23 +0000 (0:00:02.834) 0:03:30.460 ********** 2026-04-13 00:53:59.484064 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.484069 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.484074 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.484080 | orchestrator | 2026-04-13 00:53:59.484085 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-13 00:53:59.484090 | orchestrator | Monday 13 April 2026 00:51:25 +0000 (0:00:01.938) 0:03:32.398 ********** 2026-04-13 00:53:59.484096 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484101 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484107 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484113 | orchestrator | 2026-04-13 00:53:59.484118 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-13 00:53:59.484123 | orchestrator | Monday 13 April 2026 00:51:27 +0000 (0:00:01.235) 0:03:33.633 ********** 2026-04-13 00:53:59.484129 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484135 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484140 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484145 | orchestrator | 2026-04-13 00:53:59.484151 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-13 00:53:59.484156 | orchestrator | Monday 13 April 2026 
00:51:27 +0000 (0:00:00.278) 0:03:33.912 ********** 2026-04-13 00:53:59.484161 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.484167 | orchestrator | 2026-04-13 00:53:59.484172 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-13 00:53:59.484177 | orchestrator | Monday 13 April 2026 00:51:28 +0000 (0:00:01.005) 0:03:34.917 ********** 2026-04-13 00:53:59.484188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:53:59.484199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:53:59.484207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-13 00:53:59.484213 | orchestrator | 2026-04-13 00:53:59.484219 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-13 00:53:59.484224 | orchestrator | Monday 13 April 2026 00:51:30 +0000 (0:00:01.799) 0:03:36.717 ********** 2026-04-13 00:53:59.484230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 
00:53:59.484236 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:53:59.484254 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-13 00:53:59.484265 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484273 | orchestrator | 2026-04-13 00:53:59.484284 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-13 
00:53:59.484293 | orchestrator | Monday 13 April 2026 00:51:30 +0000 (0:00:00.422) 0:03:37.139 ********** 2026-04-13 00:53:59.484306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-13 00:53:59.484317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-13 00:53:59.484327 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484337 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-13 00:53:59.484361 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484371 | orchestrator | 2026-04-13 00:53:59.484381 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-13 00:53:59.484390 | orchestrator | Monday 13 April 2026 00:51:31 +0000 (0:00:00.627) 0:03:37.767 ********** 2026-04-13 00:53:59.484399 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484408 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484418 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484429 | orchestrator | 2026-04-13 00:53:59.484438 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-13 00:53:59.484448 | orchestrator | 
Monday 13 April 2026 00:51:32 +0000 (0:00:00.737) 0:03:38.504 ********** 2026-04-13 00:53:59.484456 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484462 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484467 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484472 | orchestrator | 2026-04-13 00:53:59.484478 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-13 00:53:59.484501 | orchestrator | Monday 13 April 2026 00:51:33 +0000 (0:00:01.321) 0:03:39.826 ********** 2026-04-13 00:53:59.484507 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.484513 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.484518 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.484523 | orchestrator | 2026-04-13 00:53:59.484529 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-13 00:53:59.484540 | orchestrator | Monday 13 April 2026 00:51:33 +0000 (0:00:00.303) 0:03:40.129 ********** 2026-04-13 00:53:59.484545 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.484551 | orchestrator | 2026-04-13 00:53:59.484556 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-13 00:53:59.484561 | orchestrator | Monday 13 April 2026 00:51:34 +0000 (0:00:01.208) 0:03:41.337 ********** 2026-04-13 00:53:59.484569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.484581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.484614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-13 00:53:59.484631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-13 00:53:59.484639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-13 00:53:59.484690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-13 00:53:59.484717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-04-13 00:53:59.484728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.484749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.484781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-13 00:53:59.484790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-13 00:53:59.484800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.484823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-13 00:53:59.484844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:53:59.484859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-13 00:53:59.484869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-13 00:53:59.484879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.484889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:53:59.484942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-13 00:53:59.484949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.484956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.484963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.484973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-13 00:53:59.484980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.484993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:53:59.485007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485013 | orchestrator | 2026-04-13 00:53:59.485018 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-13 00:53:59.485024 | orchestrator | Monday 13 April 2026 00:51:39 +0000 (0:00:04.561) 0:03:45.899 ********** 2026-04-13 00:53:59.485033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.485042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-13 00:53:59.485058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-13 00:53:59.485064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485073 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-13 00:53:59.485098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.485104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 
5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-13 00:53:59.485138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-13 00:53:59.485144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-13 00:53:59.485150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485177 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:53:59.485189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.485195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485213 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.485222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-13 00:53:59.485238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-13 00:53:59.485248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-13 00:53:59.485386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-13 00:53:59.485404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-13 00:53:59.485447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-13 00:53:59.485453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485477 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.485507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-13 00:53:59.485514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-13 00:53:59.485521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.485527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-04-13 00:53:59.485540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-13 00:53:59.485546 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.485551 | orchestrator | 2026-04-13 00:53:59.485557 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-13 00:53:59.485563 | orchestrator | Monday 13 April 2026 00:51:41 +0000 (0:00:01.596) 0:03:47.495 ********** 2026-04-13 00:53:59.485569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.485574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.485580 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.485589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.485599 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.485607 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.485617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.485626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.485634 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.485642 | orchestrator | 2026-04-13 00:53:59.485651 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-13 00:53:59.485659 | orchestrator | Monday 13 April 2026 00:51:42 +0000 (0:00:01.572) 0:03:49.068 ********** 2026-04-13 00:53:59.485667 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.485675 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.485683 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.485692 | orchestrator | 2026-04-13 00:53:59.485701 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-13 00:53:59.485709 | orchestrator | Monday 13 April 2026 00:51:44 +0000 (0:00:01.608) 0:03:50.677 ********** 2026-04-13 00:53:59.485714 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.485726 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.485731 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.485736 | orchestrator | 2026-04-13 
00:53:59.485742 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-13 00:53:59.485747 | orchestrator | Monday 13 April 2026 00:51:46 +0000 (0:00:02.277) 0:03:52.954 ********** 2026-04-13 00:53:59.485753 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.485758 | orchestrator | 2026-04-13 00:53:59.485764 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-13 00:53:59.485769 | orchestrator | Monday 13 April 2026 00:51:47 +0000 (0:00:01.205) 0:03:54.160 ********** 2026-04-13 00:53:59.485776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-13 00:53:59.485788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-13 00:53:59.485799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-13 00:53:59.485805 | orchestrator | 2026-04-13 00:53:59.485811 | 
orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-13 00:53:59.485820 | orchestrator | Monday 13 April 2026 00:51:51 +0000 (0:00:03.508) 0:03:57.668 ********** 2026-04-13 00:53:59.485826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-13 00:53:59.485832 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.485842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-13 00:53:59.485848 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.485857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-13 00:53:59.485863 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.485869 | orchestrator | 2026-04-13 00:53:59.485874 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-13 00:53:59.485879 | orchestrator | Monday 13 April 2026 00:51:52 +0000 
(0:00:01.115) 0:03:58.784 ********** 2026-04-13 00:53:59.485885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.485896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.485903 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.485908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.485914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.485920 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.485926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.485931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  
2026-04-13 00:53:59.485937 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.485942 | orchestrator | 2026-04-13 00:53:59.485948 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-13 00:53:59.485954 | orchestrator | Monday 13 April 2026 00:51:53 +0000 (0:00:00.803) 0:03:59.587 ********** 2026-04-13 00:53:59.485959 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.485965 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.485970 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.485976 | orchestrator | 2026-04-13 00:53:59.485981 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-13 00:53:59.485987 | orchestrator | Monday 13 April 2026 00:51:54 +0000 (0:00:01.395) 0:04:00.983 ********** 2026-04-13 00:53:59.485992 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.485998 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.486003 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.486008 | orchestrator | 2026-04-13 00:53:59.486014 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-13 00:53:59.486043 | orchestrator | Monday 13 April 2026 00:51:56 +0000 (0:00:02.156) 0:04:03.140 ********** 2026-04-13 00:53:59.486049 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.486055 | orchestrator | 2026-04-13 00:53:59.486061 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-13 00:53:59.486071 | orchestrator | Monday 13 April 2026 00:51:58 +0000 (0:00:01.512) 0:04:04.652 ********** 2026-04-13 00:53:59.486077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.486088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-13 00:53:59.486155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.486188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.486196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.486223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
 2026-04-13 00:53:59.486245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.486258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486270 | orchestrator | 2026-04-13 00:53:59.486277 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-13 00:53:59.486282 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:05.258) 0:04:09.910 ********** 2026-04-13 00:53:59.486288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.486299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.486308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486324 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.486330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.486336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.486350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2026-04-13 00:53:59.486387 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.486397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.486407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.486418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.486450 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.486457 | orchestrator | 2026-04-13 00:53:59.486462 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-13 00:53:59.486468 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:00.881) 0:04:10.792 ********** 2026-04-13 
00:53:59.486474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486546 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.486551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486580 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.486585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.486606 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.486611 | orchestrator | 2026-04-13 00:53:59.486617 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-13 00:53:59.486622 | orchestrator | Monday 13 April 2026 00:52:06 +0000 
(0:00:01.740) 0:04:12.533 ********** 2026-04-13 00:53:59.486628 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.486633 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.486639 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.486644 | orchestrator | 2026-04-13 00:53:59.486654 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-13 00:53:59.486659 | orchestrator | Monday 13 April 2026 00:52:07 +0000 (0:00:01.422) 0:04:13.955 ********** 2026-04-13 00:53:59.486665 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.486670 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.486676 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.486681 | orchestrator | 2026-04-13 00:53:59.486686 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-13 00:53:59.486692 | orchestrator | Monday 13 April 2026 00:52:09 +0000 (0:00:02.434) 0:04:16.389 ********** 2026-04-13 00:53:59.486697 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.486703 | orchestrator | 2026-04-13 00:53:59.486709 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-13 00:53:59.486714 | orchestrator | Monday 13 April 2026 00:52:11 +0000 (0:00:01.355) 0:04:17.745 ********** 2026-04-13 00:53:59.486719 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-novncproxy) 2026-04-13 00:53:59.486725 | orchestrator | 2026-04-13 00:53:59.486731 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-13 00:53:59.486736 | orchestrator | Monday 13 April 2026 00:52:12 +0000 (0:00:01.336) 0:04:19.082 ********** 2026-04-13 00:53:59.486745 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-13 00:53:59.486752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-13 00:53:59.486758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-13 00:53:59.486763 | orchestrator | 2026-04-13 00:53:59.486769 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-13 00:53:59.486775 | orchestrator | Monday 13 April 2026 00:52:16 +0000 (0:00:04.225) 0:04:23.308 ********** 2026-04-13 00:53:59.486781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 
'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.486790 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.486796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.486801 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.486810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.486816 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.486822 | orchestrator | 2026-04-13 00:53:59.486827 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-13 00:53:59.486833 | orchestrator | Monday 13 April 2026 00:52:18 +0000 (0:00:01.381) 0:04:24.689 ********** 2026-04-13 
00:53:59.486839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:53:59.486845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:53:59.486850 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.486862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:53:59.486868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:53:59.486874 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.486879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:53:59.486885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-13 00:53:59.486890 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.486895 | orchestrator | 2026-04-13 00:53:59.486901 | orchestrator | TASK [proxysql-config : 
Copying over nova-cell ProxySQL users config] ********** 2026-04-13 00:53:59.486906 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:01.710) 0:04:26.400 ********** 2026-04-13 00:53:59.486916 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.486922 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.486927 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.486933 | orchestrator | 2026-04-13 00:53:59.486937 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-13 00:53:59.486942 | orchestrator | Monday 13 April 2026 00:52:22 +0000 (0:00:02.578) 0:04:28.978 ********** 2026-04-13 00:53:59.486947 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.486952 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.486957 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.486962 | orchestrator | 2026-04-13 00:53:59.486966 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-13 00:53:59.486971 | orchestrator | Monday 13 April 2026 00:52:25 +0000 (0:00:03.093) 0:04:32.072 ********** 2026-04-13 00:53:59.486976 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-13 00:53:59.486981 | orchestrator | 2026-04-13 00:53:59.486986 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-13 00:53:59.486991 | orchestrator | Monday 13 April 2026 00:52:26 +0000 (0:00:00.851) 0:04:32.924 ********** 2026-04-13 00:53:59.486996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.487002 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.487016 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.487026 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487031 | orchestrator | 2026-04-13 00:53:59.487036 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-13 00:53:59.487041 | orchestrator | Monday 13 April 2026 00:52:27 +0000 (0:00:01.345) 0:04:34.269 ********** 2026-04-13 00:53:59.487049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.487058 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.487068 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-13 00:53:59.487078 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487083 | orchestrator | 2026-04-13 00:53:59.487088 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 
2026-04-13 00:53:59.487093 | orchestrator | Monday 13 April 2026 00:52:29 +0000 (0:00:01.425) 0:04:35.695 ********** 2026-04-13 00:53:59.487098 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487103 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487107 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487112 | orchestrator | 2026-04-13 00:53:59.487118 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-13 00:53:59.487123 | orchestrator | Monday 13 April 2026 00:52:30 +0000 (0:00:01.501) 0:04:37.197 ********** 2026-04-13 00:53:59.487128 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.487133 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.487137 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.487142 | orchestrator | 2026-04-13 00:53:59.487147 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-13 00:53:59.487152 | orchestrator | Monday 13 April 2026 00:52:33 +0000 (0:00:02.702) 0:04:39.899 ********** 2026-04-13 00:53:59.487157 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.487161 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.487166 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.487171 | orchestrator | 2026-04-13 00:53:59.487176 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-13 00:53:59.487181 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:03.051) 0:04:42.951 ********** 2026-04-13 00:53:59.487186 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-13 00:53:59.487190 | orchestrator | 2026-04-13 00:53:59.487195 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-13 00:53:59.487200 | orchestrator | Monday 13 April 
2026 00:52:37 +0000 (0:00:01.184) 0:04:44.135 ********** 2026-04-13 00:53:59.487208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:53:59.487213 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:53:59.487227 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:53:59.487240 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487245 | 
orchestrator | 2026-04-13 00:53:59.487250 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-13 00:53:59.487255 | orchestrator | Monday 13 April 2026 00:52:38 +0000 (0:00:01.069) 0:04:45.204 ********** 2026-04-13 00:53:59.487260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:53:59.487265 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:53:59.487274 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-13 00:53:59.487284 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487289 | orchestrator | 2026-04-13 00:53:59.487294 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-13 00:53:59.487299 | orchestrator | Monday 13 April 2026 00:52:40 +0000 (0:00:01.569) 0:04:46.774 ********** 2026-04-13 00:53:59.487303 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487308 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487313 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487318 | orchestrator | 2026-04-13 00:53:59.487323 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-13 00:53:59.487328 | orchestrator | Monday 13 April 2026 00:52:41 +0000 (0:00:01.620) 0:04:48.394 ********** 2026-04-13 00:53:59.487333 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.487337 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.487345 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.487350 | orchestrator | 2026-04-13 00:53:59.487355 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-13 00:53:59.487360 | orchestrator | Monday 13 April 2026 00:52:44 +0000 (0:00:02.598) 0:04:50.993 ********** 2026-04-13 00:53:59.487365 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.487370 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.487377 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.487382 | orchestrator | 2026-04-13 00:53:59.487387 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-13 00:53:59.487392 | orchestrator | Monday 13 April 2026 00:52:47 +0000 (0:00:03.252) 0:04:54.245 ********** 2026-04-13 00:53:59.487396 | orchestrator | 
included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.487402 | orchestrator | 2026-04-13 00:53:59.487406 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-13 00:53:59.487411 | orchestrator | Monday 13 April 2026 00:52:49 +0000 (0:00:01.280) 0:04:55.526 ********** 2026-04-13 00:53:59.487420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 00:53:59.487426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:53:59.487431 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 
00:53:59.487455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 00:53:59.487463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:53:59.487468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.487502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-13 00:53:59.487508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:53:59.487516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.487531 | orchestrator | 2026-04-13 00:53:59.487536 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-13 00:53:59.487541 | orchestrator | Monday 13 April 2026 00:52:52 +0000 (0:00:03.893) 0:04:59.420 ********** 2026-04-13 00:53:59.487546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 00:53:59.487557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:53:59.487563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.487581 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-13 00:53:59.487596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:53:59.487604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.487624 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-13 00:53:59.487638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-13 00:53:59.487644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-13 00:53:59.487653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-13 
00:53:59.487661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-13 00:53:59.487666 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487671 | orchestrator | 2026-04-13 00:53:59.487676 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-13 00:53:59.487681 | orchestrator | Monday 13 April 2026 00:52:53 +0000 (0:00:01.031) 0:05:00.452 ********** 2026-04-13 00:53:59.487686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:53:59.487691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:53:59.487696 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.487701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:53:59.487706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:53:59.487711 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.487719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:53:59.487725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-13 00:53:59.487730 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.487735 | orchestrator | 2026-04-13 00:53:59.487740 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-13 00:53:59.487745 | orchestrator | Monday 13 April 2026 00:52:54 +0000 (0:00:01.008) 0:05:01.460 ********** 2026-04-13 00:53:59.487750 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.487755 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.487760 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.487764 | orchestrator | 2026-04-13 00:53:59.487769 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-13 00:53:59.487774 | orchestrator | Monday 13 April 2026 00:52:56 +0000 (0:00:01.324) 0:05:02.785 ********** 2026-04-13 00:53:59.487779 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:53:59.487784 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:53:59.487788 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:53:59.487793 | orchestrator | 2026-04-13 00:53:59.487798 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-13 00:53:59.487802 | orchestrator | Monday 13 April 2026 00:52:58 +0000 (0:00:02.345) 
0:05:05.130 ********** 2026-04-13 00:53:59.487807 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.487812 | orchestrator | 2026-04-13 00:53:59.487817 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-13 00:53:59.487822 | orchestrator | Monday 13 April 2026 00:53:00 +0000 (0:00:01.717) 0:05:06.848 ********** 2026-04-13 00:53:59.487831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.487840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.487845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:53:59.487854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:53:59.487863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:53:59.487872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:53:59.487881 | orchestrator | 2026-04-13 00:53:59.487886 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-13 00:53:59.487891 | orchestrator | Monday 13 April 2026 00:53:05 +0000 (0:00:05.390) 0:05:12.238 ********** 2026-04-13 00:53:59.487896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.487902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:53:59.487907 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.488016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.488030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:53:59.488040 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.488046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:53:59.488051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:53:59.488068 | 
orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.488074 | orchestrator | 2026-04-13 00:53:59.488079 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-13 00:53:59.488084 | orchestrator | Monday 13 April 2026 00:53:06 +0000 (0:00:00.644) 0:05:12.883 ********** 2026-04-13 00:53:59.488089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.488095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-13 00:53:59.488103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-13 00:53:59.488113 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.488118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.488124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-13 00:53:59.488129 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-13 00:53:59.488134 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.488139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.488144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-13 00:53:59.488149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-13 00:53:59.488154 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.488159 | orchestrator | 2026-04-13 00:53:59.488164 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-13 00:53:59.488169 | orchestrator | Monday 13 April 2026 00:53:07 +0000 (0:00:01.286) 0:05:14.170 ********** 2026-04-13 00:53:59.488173 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.488178 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.488183 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.488188 | orchestrator | 2026-04-13 00:53:59.488193 | 
orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-13 00:53:59.488198 | orchestrator | Monday 13 April 2026 00:53:08 +0000 (0:00:00.449) 0:05:14.619 ********** 2026-04-13 00:53:59.488207 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.488215 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.488224 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.488232 | orchestrator | 2026-04-13 00:53:59.488240 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-13 00:53:59.488247 | orchestrator | Monday 13 April 2026 00:53:09 +0000 (0:00:01.342) 0:05:15.961 ********** 2026-04-13 00:53:59.488254 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.488262 | orchestrator | 2026-04-13 00:53:59.488270 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-13 00:53:59.488278 | orchestrator | Monday 13 April 2026 00:53:11 +0000 (0:00:01.873) 0:05:17.835 ********** 2026-04-13 00:53:59.488305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-13 00:53:59.488325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:53:59.488335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-13 00:53:59.488343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 00:53:59.488351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:53:59.488376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 00:53:59.488392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 00:53:59.488443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 00:53:59.488478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.488525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-13 00:53:59.488544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.488561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-13 00:53:59.488572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.488625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-13 00:53:59.488631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488653 | orchestrator |
2026-04-13 00:53:59.488671 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-13 00:53:59.488677 | orchestrator | Monday 13 April 2026 00:53:15 +0000 (0:00:04.331) 0:05:22.167 **********
2026-04-13 00:53:59.488686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 00:53:59.488692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 00:53:59.488698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.488741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-13 00:53:59.488748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488768 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.488775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 00:53:59.488793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 00:53:59.488802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.488834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-13 00:53:59.488840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 00:53:59.488871 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.488877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 00:53:59.488883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 00:53:59.488915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-13 00:53:59.488924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 00:53:59.488939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 00:53:59.488945 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.488951 | orchestrator |
2026-04-13 00:53:59.488956 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-13 00:53:59.488962 | orchestrator | Monday 13 April 2026 00:53:16 +0000 (0:00:00.919) 0:05:23.087 **********
2026-04-13 00:53:59.488971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-13 00:53:59.488977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-13 00:53:59.488984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.488989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-13 00:53:59.488998 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.489004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-13 00:53:59.489009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091',
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-13 00:53:59.489014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.489019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.489024 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-13 00:53:59.489038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-13 00:53:59.489043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.489048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-13 00:53:59.489053 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489058 | orchestrator | 2026-04-13 00:53:59.489063 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-13 00:53:59.489068 | orchestrator | Monday 13 April 2026 00:53:17 +0000 (0:00:01.306) 0:05:24.393 ********** 2026-04-13 00:53:59.489073 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489078 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489083 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489088 | orchestrator | 2026-04-13 00:53:59.489093 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-13 00:53:59.489098 | orchestrator | Monday 13 April 2026 00:53:18 +0000 (0:00:00.469) 0:05:24.862 ********** 2026-04-13 00:53:59.489106 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489111 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489116 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489121 | orchestrator | 2026-04-13 00:53:59.489144 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-13 00:53:59.489150 | orchestrator | Monday 13 April 2026 00:53:19 +0000 (0:00:01.376) 0:05:26.239 ********** 2026-04-13 00:53:59.489154 | 
orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.489160 | orchestrator | 2026-04-13 00:53:59.489165 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-13 00:53:59.489169 | orchestrator | Monday 13 April 2026 00:53:21 +0000 (0:00:01.431) 0:05:27.671 ********** 2026-04-13 00:53:59.489175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:53:59.489180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:53:59.489189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-13 00:53:59.489195 | orchestrator | 2026-04-13 00:53:59.489200 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-13 00:53:59.489211 | orchestrator | Monday 13 April 2026 00:53:24 +0000 (0:00:03.003) 0:05:30.674 ********** 2026-04-13 00:53:59.489216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-13 00:53:59.489222 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-13 00:53:59.489232 | orchestrator | 
skipping: [testbed-node-1] 2026-04-13 00:53:59.489237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-13 00:53:59.489249 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489257 | orchestrator | 2026-04-13 00:53:59.489264 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-13 00:53:59.489272 | orchestrator | Monday 13 April 2026 00:53:24 +0000 (0:00:00.455) 0:05:31.129 ********** 2026-04-13 00:53:59.489280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-13 00:53:59.489290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-13 00:53:59.489299 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489314 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489322 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-13 00:53:59.489330 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489338 | orchestrator | 2026-04-13 00:53:59.489346 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-13 00:53:59.489358 | orchestrator | Monday 13 April 2026 00:53:25 +0000 (0:00:00.700) 0:05:31.830 ********** 2026-04-13 00:53:59.489366 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489374 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489381 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489391 | orchestrator | 2026-04-13 00:53:59.489401 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-13 00:53:59.489409 | orchestrator | Monday 13 April 2026 00:53:25 +0000 (0:00:00.492) 0:05:32.323 ********** 2026-04-13 00:53:59.489417 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489424 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489432 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489440 | orchestrator | 2026-04-13 00:53:59.489449 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-13 00:53:59.489458 | orchestrator | Monday 13 April 2026 00:53:27 +0000 (0:00:01.455) 0:05:33.778 ********** 2026-04-13 00:53:59.489466 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:53:59.489475 | orchestrator | 2026-04-13 00:53:59.489480 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-13 00:53:59.489501 | orchestrator | Monday 13 April 2026 00:53:29 +0000 (0:00:01.781) 0:05:35.559 ********** 2026-04-13 00:53:59.489507 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-13 00:53:59.489514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-13 00:53:59.489525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-13 00:53:59.489540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-13 00:53:59.489545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-13 00:53:59.489551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-13 00:53:59.489560 | orchestrator | 2026-04-13 00:53:59.489565 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-13 00:53:59.489573 | orchestrator | Monday 13 April 2026 00:53:35 +0000 (0:00:06.576) 0:05:42.136 ********** 2026-04-13 00:53:59.489581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-13 00:53:59.489587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-13 00:53:59.489592 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /docs']}}}})  2026-04-13 00:53:59.489606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-13 00:53:59.489618 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-13 00:53:59.489634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-13 00:53:59.489643 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489650 | orchestrator | 2026-04-13 00:53:59.489657 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-13 00:53:59.489664 | orchestrator | Monday 13 April 2026 00:53:36 +0000 (0:00:01.147) 0:05:43.283 ********** 2026-04-13 00:53:59.489672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-13 
00:53:59.489680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-13 00:53:59.489689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.489703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.489710 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.489715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-13 00:53:59.489724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-13 00:53:59.489729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-13 00:53:59.489735 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.489740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-13 00:53:59.489748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.489753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.489759 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.489764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-13 00:53:59.489769 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.489773 | orchestrator | 2026-04-13 00:53:59.489778 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-13 00:53:59.489783 | orchestrator | Monday 13 April 2026 00:53:37 +0000 (0:00:01.166) 0:05:44.450 ********** 2026-04-13 00:53:59.489788 | orchestrator | changed: [testbed-node-0] 2026-04-13 
00:53:59.489793 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.489798 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.489803 | orchestrator | 
2026-04-13 00:53:59.489808 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-13 00:53:59.489812 | orchestrator | Monday 13 April 2026 00:53:39 +0000 (0:00:01.193) 0:05:45.643 **********
2026-04-13 00:53:59.489817 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:53:59.489822 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:53:59.489827 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:53:59.489832 | orchestrator | 
2026-04-13 00:53:59.489836 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-13 00:53:59.489842 | orchestrator | Monday 13 April 2026 00:53:41 +0000 (0:00:02.026) 0:05:47.669 **********
2026-04-13 00:53:59.489850 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.489856 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.489860 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.489865 | orchestrator | 
2026-04-13 00:53:59.489870 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-13 00:53:59.489875 | orchestrator | Monday 13 April 2026 00:53:41 +0000 (0:00:00.317) 0:05:47.986 **********
2026-04-13 00:53:59.489880 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.489884 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.489889 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.489894 | orchestrator | 
2026-04-13 00:53:59.489899 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-13 00:53:59.489904 | orchestrator | Monday 13 April 2026 00:53:42 +0000 (0:00:00.643) 0:05:48.630 **********
2026-04-13 00:53:59.489909 | orchestrator | skipping: [testbed-node-0] 2026-04-13
00:53:59.489914 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.489918 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.489923 | orchestrator | 
2026-04-13 00:53:59.489928 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-13 00:53:59.489933 | orchestrator | Monday 13 April 2026 00:53:42 +0000 (0:00:00.388) 0:05:49.018 **********
2026-04-13 00:53:59.489938 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.489943 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.489948 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.489953 | orchestrator | 
2026-04-13 00:53:59.489957 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-13 00:53:59.489962 | orchestrator | Monday 13 April 2026 00:53:42 +0000 (0:00:00.320) 0:05:49.339 **********
2026-04-13 00:53:59.489967 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:53:59.489972 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:53:59.489977 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:53:59.489982 | orchestrator | 
2026-04-13 00:53:59.489986 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-13 00:53:59.489991 | orchestrator | Monday 13 April 2026 00:53:43 +0000 (0:00:00.312) 0:05:49.651 **********
2026-04-13 00:53:59.489996 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:53:59.490001 | orchestrator | 
2026-04-13 00:53:59.490006 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-13 00:53:59.490014 | orchestrator | Monday 13 April 2026 00:53:44 +0000 (0:00:01.810) 0:05:51.461 **********
2026-04-13 00:53:59.490056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.490065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.490071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-13 00:53:59.490080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.490086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.490091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-13 00:53:59.490100 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.490108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.490114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-13 00:53:59.490123 | orchestrator | 2026-04-13 00:53:59.490128 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-13 00:53:59.490133 | orchestrator | Monday 13 April 2026 00:53:47 +0000 (0:00:02.154) 0:05:53.616 ********** 2026-04-13 00:53:59.490138 | orchestrator | 
changed: [testbed-node-0] => { 2026-04-13 00:53:59.490143 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:53:59.490148 | orchestrator | } 2026-04-13 00:53:59.490153 | orchestrator | changed: [testbed-node-1] => { 2026-04-13 00:53:59.490158 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:53:59.490163 | orchestrator | } 2026-04-13 00:53:59.490168 | orchestrator | changed: [testbed-node-2] => { 2026-04-13 00:53:59.490172 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:53:59.490177 | orchestrator | } 2026-04-13 00:53:59.490182 | orchestrator | 2026-04-13 00:53:59.490187 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:53:59.490192 | orchestrator | Monday 13 April 2026 00:53:47 +0000 (0:00:00.376) 0:05:53.992 ********** 2026-04-13 00:53:59.490197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.490202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.490208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.490213 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:53:59.490222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.490234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.490251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.490260 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:53:59.490268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-13 00:53:59.490277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-13 00:53:59.490284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-13 00:53:59.490289 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:53:59.490294 | orchestrator | 2026-04-13 00:53:59.490299 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-13 00:53:59.490303 | orchestrator | Monday 13 April 2026 00:53:49 +0000 (0:00:02.100) 0:05:56.093 ********** 2026-04-13 00:53:59.490309 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.490313 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.490318 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.490323 | orchestrator | 2026-04-13 00:53:59.490328 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-13 00:53:59.490336 | orchestrator | Monday 13 April 2026 00:53:50 +0000 (0:00:00.624) 0:05:56.717 ********** 2026-04-13 00:53:59.490342 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:53:59.490350 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:53:59.490358 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:53:59.490372 | orchestrator | 2026-04-13 00:53:59.490379 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup keepalived container] **************
2026-04-13 00:53:59.490387 | orchestrator | Monday 13 April 2026 00:53:51 +0000 (0:00:00.786) 0:05:57.504 **********
2026-04-13 00:53:59.490395 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:59.490402 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:59.490410 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:59.490417 | orchestrator | 
2026-04-13 00:53:59.490425 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-13 00:53:59.490432 | orchestrator | Monday 13 April 2026 00:53:51 +0000 (0:00:00.886) 0:05:58.391 **********
2026-04-13 00:53:59.490441 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:59.490446 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:59.490451 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:59.490456 | orchestrator | 
2026-04-13 00:53:59.490461 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-13 00:53:59.490465 | orchestrator | Monday 13 April 2026 00:53:52 +0000 (0:00:00.836) 0:05:59.228 **********
2026-04-13 00:53:59.490470 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:53:59.490475 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:53:59.490479 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:53:59.490521 | orchestrator | 
2026-04-13 00:53:59.490531 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-13 00:53:59.490536 | orchestrator | Monday 13 April 2026 00:53:53 +0000 (0:00:00.904) 0:06:00.132 **********
2026-04-13 00:53:59.490542 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__lphlpje/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__lphlpje/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload__lphlpje/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__lphlpje/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-13 00:53:59.490557 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5i2wb480/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5i2wb480/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_5i2wb480/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5i2wb480/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-13 00:53:59.490573 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jolpep3b/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jolpep3b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_jolpep3b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jolpep3b/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"}
2026-04-13 00:53:59.490585 | orchestrator | 
2026-04-13 00:53:59.490590 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:53:59.490595 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0
2026-04-13 00:53:59.490600 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0
2026-04-13 00:53:59.490608 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0
2026-04-13 00:53:59.490613 | orchestrator | 
2026-04-13 00:53:59.490618 | orchestrator | 
2026-04-13 00:53:59.490622 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:53:59.490627 | orchestrator | Monday 13 April 2026 00:53:56 +0000 (0:00:03.143) 0:06:03.276 **********
2026-04-13 00:53:59.490632 | orchestrator | ===============================================================================
2026-04-13 00:53:59.490637 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.58s
2026-04-13 00:53:59.490642 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.83s
2026-04-13 00:53:59.490646 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.39s
2026-04-13 00:53:59.490651 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.29s
2026-04-13 00:53:59.490656 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.26s
2026-04-13 00:53:59.490660 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.20s
2026-04-13 00:53:59.490665 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.92s
2026-04-13 00:53:59.490670 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.86s
2026-04-13 00:53:59.490675 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.57s
2026-04-13 00:53:59.490680 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.56s
2026-04-13 00:53:59.490684 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.36s
2026-04-13 00:53:59.490689 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.33s
2026-04-13 00:53:59.490694 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.24s
2026-04-13 00:53:59.490699 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.23s 2026-04-13
00:53:59.490703 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.23s
2026-04-13 00:53:59.490708 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.20s
2026-04-13 00:53:59.490713 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.16s
2026-04-13 00:53:59.490718 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.03s
2026-04-13 00:53:59.490723 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.89s
2026-04-13 00:53:59.490731 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 3.78s
2026-04-13 00:54:02.514320 | orchestrator | 2026-04-13 00:54:02 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:54:02.514439 | orchestrator | 2026-04-13 00:54:02 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED
2026-04-13 00:54:02.514763 | orchestrator | 2026-04-13 00:54:02 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:02.514847 | orchestrator | 2026-04-13 00:54:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:54:05.573907 | orchestrator | 2026-04-13 00:54:05 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:54:05.574772 | orchestrator | 2026-04-13 00:54:05 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED
2026-04-13 00:54:05.576724 | orchestrator | 2026-04-13 00:54:05 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:05.576779 | orchestrator | 2026-04-13 00:54:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:54:08.608885 | orchestrator | 2026-04-13 00:54:08 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:54:08.610869 | orchestrator | 2026-04-13 00:54:08 | INFO  | Task 
6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:54:08.610942 | orchestrator | 2026-04-13 00:54:08 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:54:08.610963 | orchestrator | 2026-04-13 00:54:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:54:11.638962 | orchestrator | 2026-04-13 00:54:11 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:54:11.639176 | orchestrator | 2026-04-13 00:54:11 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:54:11.641575 | orchestrator | 2026-04-13 00:54:11 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:54:11.641625 | orchestrator | 2026-04-13 00:54:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:54:14.675787 | orchestrator | 2026-04-13 00:54:14 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:54:14.676839 | orchestrator | 2026-04-13 00:54:14 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:54:14.678401 | orchestrator | 2026-04-13 00:54:14 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:54:14.678457 | orchestrator | 2026-04-13 00:54:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:54:17.715958 | orchestrator | 2026-04-13 00:54:17 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:54:17.717256 | orchestrator | 2026-04-13 00:54:17 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:54:17.718291 | orchestrator | 2026-04-13 00:54:17 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:54:17.718345 | orchestrator | 2026-04-13 00:54:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:54:20.766445 | orchestrator | 2026-04-13 00:54:20 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state 
STARTED 2026-04-13 00:54:20.767611 | orchestrator | 2026-04-13 00:54:20 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:54:20.769072 | orchestrator | 2026-04-13 00:54:20 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:54:20.769136 | orchestrator | 2026-04-13 00:54:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:54:23.804134 | orchestrator | 2026-04-13 00:54:23 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:54:23.804597 | orchestrator | 2026-04-13 00:54:23 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state STARTED 2026-04-13 00:54:23.805547 | orchestrator | 2026-04-13 00:54:23 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED 2026-04-13 00:54:23.805623 | orchestrator | 2026-04-13 00:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:54:26.852967 | orchestrator | 2026-04-13 00:54:26 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:54:26.858199 | orchestrator | 2026-04-13 00:54:26.858302 | orchestrator | 2026-04-13 00:54:26.858330 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:54:26.858351 | orchestrator | 2026-04-13 00:54:26.858888 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:54:26.858913 | orchestrator | Monday 13 April 2026 00:54:01 +0000 (0:00:00.334) 0:00:00.334 ********** 2026-04-13 00:54:26.858924 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:54:26.858937 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:54:26.858948 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:54:26.858959 | orchestrator | 2026-04-13 00:54:26.858970 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:54:26.858981 | orchestrator | Monday 13 April 2026 00:54:01 +0000 
(0:00:00.290) 0:00:00.624 ********** 2026-04-13 00:54:26.858992 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-13 00:54:26.859003 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-13 00:54:26.859014 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-13 00:54:26.859024 | orchestrator | 2026-04-13 00:54:26.859036 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-13 00:54:26.859046 | orchestrator | 2026-04-13 00:54:26.859058 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-13 00:54:26.859069 | orchestrator | Monday 13 April 2026 00:54:01 +0000 (0:00:00.289) 0:00:00.914 ********** 2026-04-13 00:54:26.859080 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:54:26.859091 | orchestrator | 2026-04-13 00:54:26.859102 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-13 00:54:26.859113 | orchestrator | Monday 13 April 2026 00:54:02 +0000 (0:00:00.647) 0:00:01.561 ********** 2026-04-13 00:54:26.859124 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:54:26.859134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:54:26.859145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-13 00:54:26.859155 | orchestrator | 2026-04-13 00:54:26.859166 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-13 00:54:26.859176 | orchestrator | Monday 13 April 2026 00:54:03 +0000 (0:00:01.029) 0:00:02.591 ********** 2026-04-13 00:54:26.859208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.859277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.859310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.859325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.859346 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.859369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.859381 | orchestrator | 2026-04-13 00:54:26.859393 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-13 00:54:26.859404 | orchestrator | Monday 13 April 2026 00:54:04 +0000 (0:00:01.402) 0:00:03.994 ********** 2026-04-13 00:54:26.859424 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:54:26.859435 | orchestrator | 2026-04-13 00:54:26.859446 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-13 00:54:26.859457 | orchestrator | Monday 13 April 2026 00:54:05 +0000 (0:00:00.517) 0:00:04.511 ********** 2026-04-13 00:54:26.859468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.859508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.859528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.859553 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.859576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.859590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.859610 | orchestrator | 2026-04-13 00:54:26.859622 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-13 00:54:26.859639 | orchestrator | Monday 13 April 2026 00:54:08 +0000 (0:00:02.701) 0:00:07.213 ********** 2026-04-13 00:54:26.859650 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.859670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.859682 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:54:26.859694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.859706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.859731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.859743 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:54:26.859762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.859774 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:54:26.859785 | orchestrator | 2026-04-13 00:54:26.859797 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-13 00:54:26.859808 | orchestrator | Monday 13 April 2026 00:54:08 +0000 (0:00:00.823) 0:00:08.036 ********** 2026-04-13 00:54:26.859819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.859844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.859856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.859868 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:54:26.859888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.859901 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:54:26.859913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.859937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.859949 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:54:26.859960 | orchestrator | 2026-04-13 00:54:26.859971 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-13 00:54:26.859982 | orchestrator | Monday 13 April 2026 00:54:10 +0000 (0:00:01.171) 0:00:09.208 ********** 2026-04-13 00:54:26.860001 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.860013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.860038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.860055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 
00:54:26.860076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.860090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-13 00:54:26.860123 | orchestrator |
2026-04-13 00:54:26.860134 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-13 00:54:26.860145 | orchestrator | Monday 13 April 2026 00:54:13 +0000 (0:00:02.991) 0:00:12.199 **********
2026-04-13 00:54:26.860156 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:54:26.860167 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:54:26.860177 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:54:26.860188 | orchestrator |
2026-04-13 00:54:26.860199 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-13 00:54:26.860210 | orchestrator | Monday 13 April 2026 00:54:15 +0000 (0:00:02.340) 0:00:14.540 **********
2026-04-13 00:54:26.860220 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:54:26.860231 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:54:26.860242 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:54:26.860253 | orchestrator |
2026-04-13 00:54:26.860263 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-04-13 00:54:26.860274 | orchestrator | Monday 13 April 2026 00:54:16 +0000 (0:00:01.488) 0:00:16.028 **********
2026-04-13 00:54:26.860290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.860303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.860322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 00:54:26.860342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.860360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-13 00:54:26.860379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-13 00:54:26.860392 | orchestrator |
2026-04-13 00:54:26.860403 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-04-13 00:54:26.860421 | orchestrator | Monday 13 April 2026 00:54:19 +0000 (0:00:02.251) 0:00:18.279 **********
2026-04-13 00:54:26.860432 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 00:54:26.860443 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:54:26.860455 | orchestrator | }
2026-04-13 00:54:26.860499 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 00:54:26.860519 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:54:26.860537 | orchestrator | }
2026-04-13 00:54:26.860555 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 00:54:26.860572 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:54:26.860589 | orchestrator | }
2026-04-13 00:54:26.860607 | orchestrator |
2026-04-13 00:54:26.860625 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 00:54:26.860643 | orchestrator | Monday 13 April 2026 00:54:19 +0000 (0:00:00.513) 0:00:18.793 **********
2026-04-13 00:54:26.860661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.860691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.860712 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:54:26.860731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.860779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-13 00:54:26.860799 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:54:26.860818 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 00:54:26.860840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-13 00:54:26.860853 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:54:26.860863 | orchestrator |
2026-04-13 00:54:26.860874 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-13 00:54:26.860885 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:00.852) 0:00:19.645 **********
2026-04-13 00:54:26.860896 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:54:26.860906 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:54:26.860917 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:54:26.860927 | orchestrator |
2026-04-13 00:54:26.860938 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-13 00:54:26.860956 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:00.073) 0:00:19.916 **********
2026-04-13 00:54:26.860967 | orchestrator |
2026-04-13 00:54:26.860978 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-13 00:54:26.860988 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:00.060) 0:00:19.989 **********
2026-04-13 00:54:26.860999 | orchestrator |
2026-04-13 00:54:26.861009 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-13 00:54:26.861020 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:00.060) 0:00:20.050 **********
2026-04-13 00:54:26.861030 | orchestrator |
2026-04-13 00:54:26.861041 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-13 00:54:26.861058 | orchestrator | Monday 13 April 2026 00:54:21 +0000 (0:00:00.075) 0:00:20.126 **********
2026-04-13 00:54:26.861069 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:54:26.861080 | orchestrator |
2026-04-13 00:54:26.861090 | orchestrator | 
RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-13 00:54:26.861101 | orchestrator | Monday 13 April 2026 00:54:21 +0000 (0:00:00.643) 0:00:20.770 **********
2026-04-13 00:54:26.861112 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:54:26.861122 | orchestrator |
2026-04-13 00:54:26.861133 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-13 00:54:26.861144 | orchestrator | Monday 13 April 2026 00:54:21 +0000 (0:00:00.204) 0:00:20.974 **********
2026-04-13 00:54:26.861162 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__72v0gzb/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__72v0gzb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload__72v0gzb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload__72v0gzb/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"}
2026-04-13 00:54:26.861191 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xyichmi4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_xyichmi4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_xyichmi4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xyichmi4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"}
2026-04-13 00:54:26.861212 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__t_bknll/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__t_bknll/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload__t_bknll/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__t_bknll/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for 
line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-13 00:54:26.861230 | orchestrator | 2026-04-13 00:54:26.861241 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:54:26.861258 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-13 00:54:26.861271 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-13 00:54:26.861282 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-13 00:54:26.861293 | orchestrator | 2026-04-13 00:54:26.861304 | orchestrator | 2026-04-13 00:54:26.861314 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:54:26.861325 | orchestrator | Monday 13 April 2026 00:54:25 +0000 (0:00:03.285) 0:00:24.260 ********** 2026-04-13 00:54:26.861336 | orchestrator | =============================================================================== 2026-04-13 00:54:26.861346 | orchestrator | opensearch : Restart opensearch container ------------------------------- 3.29s 2026-04-13 00:54:26.861357 | orchestrator | 
opensearch : Copying over config.json files for services ---------------- 2.99s
2026-04-13 00:54:26.861367 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.70s
2026-04-13 00:54:26.861378 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.34s
2026-04-13 00:54:26.861389 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.25s
2026-04-13 00:54:26.861399 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.49s
2026-04-13 00:54:26.861410 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.40s
2026-04-13 00:54:26.861421 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.17s
2026-04-13 00:54:26.861431 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.03s
2026-04-13 00:54:26.861442 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.85s
2026-04-13 00:54:26.861452 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.82s
2026-04-13 00:54:26.861463 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.65s
2026-04-13 00:54:26.861510 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.64s
2026-04-13 00:54:26.861522 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2026-04-13 00:54:26.861532 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.51s
2026-04-13 00:54:26.861543 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-04-13 00:54:26.861554 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2026-04-13 00:54:26.861565 | orchestrator
| opensearch : include_tasks ---------------------------------------------- 0.27s
2026-04-13 00:54:26.861587 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.21s
2026-04-13 00:54:26.861598 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.20s
2026-04-13 00:54:26.861609 | orchestrator | 2026-04-13 00:54:26 | INFO  | Task 6d35cdd8-bc5e-43b0-b502-fd7eea008f05 is in state SUCCESS
2026-04-13 00:54:26.861620 | orchestrator | 2026-04-13 00:54:26 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:26.861637 | orchestrator | 2026-04-13 00:54:26 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:54:29.908640 | orchestrator | 2026-04-13 00:54:29 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:54:29.911437 | orchestrator | 2026-04-13 00:54:29 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:29.911506 | orchestrator | 2026-04-13 00:54:29 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:54:32.975664 | orchestrator | 2026-04-13 00:54:32 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:54:32.976646 | orchestrator | 2026-04-13 00:54:32 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:32.976684 | orchestrator | 2026-04-13 00:54:32 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:54:36.020550 | orchestrator | 2026-04-13 00:54:36 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:54:36.023237 | orchestrator | 2026-04-13 00:54:36 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:36.023316 | orchestrator | 2026-04-13 00:54:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:54:39.074344 | orchestrator | 2026-04-13 00:54:39 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state
STARTED
2026-04-13 00:54:39.074453 | orchestrator | 2026-04-13 00:54:39 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:54:39.074500 | orchestrator | 2026-04-13 00:54:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 
00:55:09.569365 | orchestrator | 2026-04-13 00:55:09 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:55:12.618141 | orchestrator | 2026-04-13 00:55:12 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:55:12.619289 | orchestrator | 2026-04-13 00:55:12 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:55:12.619902 | orchestrator | 2026-04-13 00:55:12 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:55:15.673520 | orchestrator | 2026-04-13 00:55:15 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:55:15.676439 | orchestrator | 2026-04-13 00:55:15 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:55:15.676555 | orchestrator | 2026-04-13 00:55:15 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:55:18.747860 | orchestrator | 2026-04-13 00:55:18 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:55:18.750646 | orchestrator | 2026-04-13 00:55:18 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state STARTED
2026-04-13 00:55:18.750696 | orchestrator | 2026-04-13 00:55:18 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:55:21.797826 | orchestrator | 2026-04-13 00:55:21 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED
2026-04-13 00:55:21.800351 | orchestrator | 2026-04-13 00:55:21 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:55:21.804618 | orchestrator | 2026-04-13 00:55:21 | INFO  | Task 35725425-f91c-4a23-a9a1-d14c4e2bbd28 is in state SUCCESS
2026-04-13 00:55:21.806080 | orchestrator |
2026-04-13 00:55:21.806133 | orchestrator |
2026-04-13 00:55:21.806148 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-04-13 00:55:21.806167 | orchestrator |
2026-04-13 00:55:21.806187 | orchestrator | TASK [Inform the user about the
following task] ********************************
2026-04-13 00:55:21.806208 | orchestrator | Monday 13 April 2026 00:54:00 +0000 (0:00:00.110) 0:00:00.110 **********
2026-04-13 00:55:21.806227 | orchestrator | ok: [localhost] => {
2026-04-13 00:55:21.806247 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-04-13 00:55:21.806718 | orchestrator | }
2026-04-13 00:55:21.806739 | orchestrator |
2026-04-13 00:55:21.806750 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-04-13 00:55:21.806761 | orchestrator | Monday 13 April 2026 00:54:00 +0000 (0:00:00.049) 0:00:00.159 **********
2026-04-13 00:55:21.806773 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-04-13 00:55:21.806785 | orchestrator | ...ignoring
2026-04-13 00:55:21.806796 | orchestrator |
2026-04-13 00:55:21.806807 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-04-13 00:55:21.806818 | orchestrator | Monday 13 April 2026 00:54:03 +0000 (0:00:03.052) 0:00:03.212 **********
2026-04-13 00:55:21.806835 | orchestrator | skipping: [localhost]
2026-04-13 00:55:21.806855 | orchestrator |
2026-04-13 00:55:21.806873 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-04-13 00:55:21.806891 | orchestrator | Monday 13 April 2026 00:54:03 +0000 (0:00:00.072) 0:00:03.284 **********
2026-04-13 00:55:21.806909 | orchestrator | ok: [localhost]
2026-04-13 00:55:21.806926 | orchestrator |
2026-04-13 00:55:21.806943 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:55:21.806961 | orchestrator |
2026-04-13 00:55:21.806979 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
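The 'Check MariaDB service' task above is a banner probe: connect to port 3306 and wait for the string MariaDB in the server greeting (Ansible's wait_for module with search_regex does the real work here). A minimal illustrative stand-in for that probe, with `wait_for_banner` as a hypothetical name and a throwaway local server standing in for MariaDB:

```python
import socket
import threading

def wait_for_banner(host, port, needle, timeout=2.0):
    """Connect, read the first bytes the server sends, and check for
    `needle` (e.g. 'MariaDB' in the MySQL protocol greeting). Returns
    False on connection failure or timeout, like the ignored task above."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            data = s.recv(1024)
        return needle.encode() in data
    except OSError:
        return False

# Demo: a one-shot local server that sends a MariaDB-style greeting.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"5.5.5-10.11.16-MariaDB\0")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()
result = wait_for_banner("127.0.0.1", port, "MariaDB")
t.join()
srv.close()
print(result)  # -> True
```

In the log the probe times out (no MariaDB deployed yet), the failure is ignored, and the playbook falls through to `kolla_action_mariadb = kolla_action_ng` for a fresh deploy.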
2026-04-13 00:55:21.806997 | orchestrator | Monday 13 April 2026 00:54:04 +0000 (0:00:00.224) 0:00:03.509 **********
2026-04-13 00:55:21.807016 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:21.807034 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:21.807052 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:21.807071 | orchestrator |
2026-04-13 00:55:21.807090 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:55:21.807105 | orchestrator | Monday 13 April 2026 00:54:04 +0000 (0:00:00.342) 0:00:03.852 **********
2026-04-13 00:55:21.807116 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-13 00:55:21.807128 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-13 00:55:21.807138 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-13 00:55:21.807149 | orchestrator |
2026-04-13 00:55:21.807160 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-13 00:55:21.807171 | orchestrator |
2026-04-13 00:55:21.807181 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-13 00:55:21.807192 | orchestrator | Monday 13 April 2026 00:54:04 +0000 (0:00:00.452) 0:00:04.304 **********
2026-04-13 00:55:21.807220 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:55:21.807232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:55:21.807243 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:55:21.807254 | orchestrator |
2026-04-13 00:55:21.807265 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-13 00:55:21.807276 | orchestrator | Monday 13 April 2026 00:54:05 +0000 (0:00:00.685) 0:00:04.653 **********
2026-04-13 00:55:21.807287 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:55:21.807299 | orchestrator | 2026-04-13 00:55:21.807309 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-13 00:55:21.807338 | orchestrator | Monday 13 April 2026 00:54:06 +0000 (0:00:00.685) 0:00:05.338 ********** 2026-04-13 00:55:21.807423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.807493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.807564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.807603 | 
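The `custom_member_list` entries repeated in the per-node dicts above all follow one pattern: the first node is the primary, every later node is marked `backup`. A hypothetical helper (illustrative only, not kolla-ansible's templating) that renders that pattern:

```python
def haproxy_members(nodes, port=3306):
    """Render HAProxy backend 'server' lines matching the
    custom_member_list pattern in the log: first node is the
    primary, subsequent nodes get the 'backup' keyword."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"
        lines.append(line)
    return lines

for line in haproxy_members([
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]):
    print(line)
```

With HAProxy in this active/backup layout, all MariaDB traffic lands on one Galera node at a time, which avoids write conflicts across the cluster.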
orchestrator | 2026-04-13 00:55:21.807624 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-13 00:55:21.807645 | orchestrator | Monday 13 April 2026 00:54:09 +0000 (0:00:03.230) 0:00:08.568 ********** 2026-04-13 00:55:21.807669 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.807690 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.807709 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:55:21.807728 | orchestrator | 2026-04-13 00:55:21.807748 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-13 00:55:21.807767 | orchestrator | Monday 13 April 2026 00:54:09 +0000 (0:00:00.741) 0:00:09.310 ********** 2026-04-13 00:55:21.807786 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.807804 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.807823 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:55:21.807841 | orchestrator | 2026-04-13 00:55:21.807859 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-13 00:55:21.807877 | orchestrator | Monday 13 April 2026 00:54:11 +0000 (0:00:01.494) 0:00:10.804 ********** 2026-04-13 00:55:21.807909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.807969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.807991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.808012 | orchestrator | 2026-04-13 00:55:21.808023 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-13 00:55:21.808034 | orchestrator | Monday 13 April 2026 00:54:15 +0000 (0:00:03.538) 0:00:14.343 ********** 2026-04-13 00:55:21.808045 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.808056 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.808067 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:55:21.808078 | orchestrator | 2026-04-13 00:55:21.808089 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-13 00:55:21.808100 | orchestrator | Monday 13 April 2026 00:54:16 +0000 (0:00:01.108) 0:00:15.452 ********** 2026-04-13 00:55:21.808110 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:55:21.808121 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:55:21.808132 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:55:21.808292 | orchestrator | 2026-04-13 00:55:21.808303 | orchestrator | TASK [mariadb : include_tasks] 
************************************************* 2026-04-13 00:55:21.808314 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:03.998) 0:00:19.450 ********** 2026-04-13 00:55:21.808325 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:55:21.808336 | orchestrator | 2026-04-13 00:55:21.808347 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-13 00:55:21.808358 | orchestrator | Monday 13 April 2026 00:54:20 +0000 (0:00:00.512) 0:00:19.964 ********** 2026-04-13 00:55:21.808386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808400 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.808412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808433 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.808532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808550 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.808561 | orchestrator | 2026-04-13 00:55:21.808572 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-13 00:55:21.808583 | orchestrator | Monday 13 April 2026 00:54:23 +0000 (0:00:02.891) 0:00:22.855 ********** 2026-04-13 00:55:21.808599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808620 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.808640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808678 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.808689 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.808700 | orchestrator | 2026-04-13 00:55:21.808711 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-13 00:55:21.808722 | orchestrator | Monday 13 April 2026 00:54:26 +0000 (0:00:02.595) 0:00:25.450 ********** 2026-04-13 00:55:21.808741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808754 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.808772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808791 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.808803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.808815 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.808828 | orchestrator | 2026-04-13 00:55:21.808855 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-13 00:55:21.808874 | orchestrator | Monday 13 April 2026 00:54:28 +0000 (0:00:02.558) 0:00:28.009 ********** 2026-04-13 00:55:21.808893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.808935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-13 00:55:21.808969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-13 00:55:21.808996 | orchestrator |
2026-04-13 00:55:21.809007 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-04-13 00:55:21.809018 | orchestrator | Monday 13 April 2026 00:54:31 +0000 (0:00:03.194) 0:00:31.203 **********
2026-04-13 00:55:21.809029 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 00:55:21.809039 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:55:21.809050 | orchestrator | }
2026-04-13 00:55:21.809061 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 00:55:21.809072 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:55:21.809083 | orchestrator | }
2026-04-13 00:55:21.809093 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 00:55:21.809104 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 00:55:21.809114 | orchestrator | }
2026-04-13 00:55:21.809125 | orchestrator |
2026-04-13 00:55:21.809136 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 00:55:21.809152 | orchestrator | Monday 13 April 2026 00:54:32 +0000 (0:00:00.344) 0:00:31.547 **********
2026-04-13 00:55:21.809164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.809175 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.809195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.809219 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.809237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.809249 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.809260 | orchestrator | 2026-04-13 00:55:21.809271 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-13 00:55:21.809282 | orchestrator | Monday 13 April 2026 00:54:34 
+0000 (0:00:02.289) 0:00:33.837 **********
2026-04-13 00:55:21.809292 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809310 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809321 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809332 | orchestrator |
2026-04-13 00:55:21.809343 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-04-13 00:55:21.809359 | orchestrator | Monday 13 April 2026 00:54:35 +0000 (0:00:00.518) 0:00:34.355 **********
2026-04-13 00:55:21.809370 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809381 | orchestrator |
2026-04-13 00:55:21.809392 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-04-13 00:55:21.809403 | orchestrator | Monday 13 April 2026 00:54:35 +0000 (0:00:00.109) 0:00:34.465 **********
2026-04-13 00:55:21.809414 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809424 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809435 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809476 | orchestrator |
2026-04-13 00:55:21.809496 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-04-13 00:55:21.809537 | orchestrator | Monday 13 April 2026 00:54:35 +0000 (0:00:00.346) 0:00:34.811 **********
2026-04-13 00:55:21.809562 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809574 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809584 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809595 | orchestrator |
2026-04-13 00:55:21.809606 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-04-13 00:55:21.809618 | orchestrator | Monday 13 April 2026 00:54:35 +0000 (0:00:00.424) 0:00:35.236 **********
2026-04-13 00:55:21.809629 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809639 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809650 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809660 | orchestrator |
2026-04-13 00:55:21.809671 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-04-13 00:55:21.809682 | orchestrator | Monday 13 April 2026 00:54:36 +0000 (0:00:00.311) 0:00:35.547 **********
2026-04-13 00:55:21.809693 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809703 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809714 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809725 | orchestrator |
2026-04-13 00:55:21.809735 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-04-13 00:55:21.809747 | orchestrator | Monday 13 April 2026 00:54:36 +0000 (0:00:00.337) 0:00:36.046 **********
2026-04-13 00:55:21.809757 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809768 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809778 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809789 | orchestrator |
2026-04-13 00:55:21.809800 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-04-13 00:55:21.809810 | orchestrator | Monday 13 April 2026 00:54:37 +0000 (0:00:00.323) 0:00:36.383 **********
2026-04-13 00:55:21.809821 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809838 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.809856 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.809874 | orchestrator |
2026-04-13 00:55:21.809892 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-04-13 00:55:21.809912 | orchestrator | Monday 13 April 2026 00:54:37 +0000 (0:00:00.349) 0:00:36.707 **********
2026-04-13 00:55:21.809931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:55:21.809944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:55:21.809961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:55:21.809972 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.809983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-13 00:55:21.809994 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-13 00:55:21.810005 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-13 00:55:21.810069 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-13 00:55:21.810105 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-13 00:55:21.810116 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-13 00:55:21.810127 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.810137 | orchestrator |
2026-04-13 00:55:21.810148 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-04-13 00:55:21.810159 | orchestrator | Monday 13 April 2026 00:54:37 +0000 (0:00:00.528) 0:00:37.056 **********
2026-04-13 00:55:21.810169 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.810180 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810191 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.810201 | orchestrator |
2026-04-13 00:55:21.810212 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-04-13 00:55:21.810223 | orchestrator | Monday 13 April 2026 00:54:38 +0000 (0:00:00.362) 0:00:37.585 **********
2026-04-13 00:55:21.810233 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.810244 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810254 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.810265 | orchestrator |
2026-04-13 00:55:21.810275 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-04-13 00:55:21.810286 | orchestrator | Monday 13 April 2026 00:54:38 +0000 (0:00:00.379) 0:00:37.947 **********
2026-04-13 00:55:21.810296 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.810307 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810318 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.810328 | orchestrator |
2026-04-13 00:55:21.810339 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-04-13 00:55:21.810350 | orchestrator | Monday 13 April 2026 00:54:39 +0000 (0:00:00.330) 0:00:38.327 **********
2026-04-13 00:55:21.810360 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.810371 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810381 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.810392 | orchestrator |
2026-04-13 00:55:21.810402 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-04-13 00:55:21.810413 | orchestrator | Monday 13 April 2026 00:54:39 +0000 (0:00:00.510) 0:00:38.658 **********
2026-04-13 00:55:21.810424 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.810434 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810513 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:21.810528 | orchestrator |
2026-04-13 00:55:21.810539 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-04-13 00:55:21.810559 | orchestrator | Monday 13 April 2026 00:54:39 +0000 (0:00:00.327) 0:00:39.168 **********
2026-04-13 00:55:21.810571 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:21.810581 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:21.810592 | orchestrator | skipping: [testbed-node-2]
2026-04-13
00:55:21.810603 | orchestrator | 2026-04-13 00:55:21.810613 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-13 00:55:21.810623 | orchestrator | Monday 13 April 2026 00:54:40 +0000 (0:00:00.327) 0:00:39.496 ********** 2026-04-13 00:55:21.810632 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.810642 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.810651 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.810661 | orchestrator | 2026-04-13 00:55:21.810670 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-13 00:55:21.810680 | orchestrator | Monday 13 April 2026 00:54:40 +0000 (0:00:00.361) 0:00:39.857 ********** 2026-04-13 00:55:21.810689 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.810699 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.810708 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.810717 | orchestrator | 2026-04-13 00:55:21.810727 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-13 00:55:21.810757 | orchestrator | Monday 13 April 2026 00:54:40 +0000 (0:00:00.303) 0:00:40.160 ********** 2026-04-13 00:55:21.810773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.810785 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.810804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.810815 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.810834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.810863 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.810880 | orchestrator | 2026-04-13 00:55:21.810900 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-13 00:55:21.810918 | orchestrator | Monday 13 April 2026 00:54:43 +0000 (0:00:02.400) 0:00:42.560 ********** 2026-04-13 00:55:21.810936 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.810948 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.810958 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.810973 | orchestrator | 2026-04-13 00:55:21.810990 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-13 00:55:21.811006 | orchestrator | Monday 13 April 2026 00:54:43 +0000 (0:00:00.518) 0:00:43.079 ********** 2026-04-13 00:55:21.811033 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.811063 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.811089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.811107 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.811136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-13 00:55:21.811165 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.811181 | orchestrator | 2026-04-13 00:55:21.811196 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-13 00:55:21.811209 | orchestrator | Monday 13 April 
2026 00:54:45 +0000 (0:00:02.099) 0:00:45.179 ********** 2026-04-13 00:55:21.811223 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.811238 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.811252 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.811267 | orchestrator | 2026-04-13 00:55:21.811283 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-13 00:55:21.811299 | orchestrator | Monday 13 April 2026 00:54:46 +0000 (0:00:00.321) 0:00:45.500 ********** 2026-04-13 00:55:21.811314 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.811330 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.811346 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.811361 | orchestrator | 2026-04-13 00:55:21.811375 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-13 00:55:21.811389 | orchestrator | Monday 13 April 2026 00:54:46 +0000 (0:00:00.323) 0:00:45.824 ********** 2026-04-13 00:55:21.811407 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.811424 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.811464 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.811485 | orchestrator | 2026-04-13 00:55:21.811503 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-13 00:55:21.811521 | orchestrator | Monday 13 April 2026 00:54:47 +0000 (0:00:00.544) 0:00:46.369 ********** 2026-04-13 00:55:21.811539 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.811556 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.811574 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.811592 | orchestrator | 2026-04-13 00:55:21.811611 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-13 00:55:21.811629 | orchestrator | Monday 13 April 
2026 00:54:47 +0000 (0:00:00.543) 0:00:46.913 ********** 2026-04-13 00:55:21.811647 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.811664 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.811682 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.811700 | orchestrator | 2026-04-13 00:55:21.811718 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-13 00:55:21.811737 | orchestrator | Monday 13 April 2026 00:54:47 +0000 (0:00:00.316) 0:00:47.230 ********** 2026-04-13 00:55:21.811754 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:55:21.811772 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:55:21.811791 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:55:21.811808 | orchestrator | 2026-04-13 00:55:21.811827 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-13 00:55:21.811851 | orchestrator | Monday 13 April 2026 00:54:48 +0000 (0:00:01.054) 0:00:48.284 ********** 2026-04-13 00:55:21.811868 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:55:21.811885 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:55:21.811902 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:55:21.811921 | orchestrator | 2026-04-13 00:55:21.811938 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-13 00:55:21.811954 | orchestrator | Monday 13 April 2026 00:54:49 +0000 (0:00:00.430) 0:00:48.714 ********** 2026-04-13 00:55:21.811970 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:55:21.811987 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:55:21.812003 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:55:21.812020 | orchestrator | 2026-04-13 00:55:21.812038 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-13 00:55:21.812055 | orchestrator | Monday 13 April 2026 00:54:49 +0000 (0:00:00.338) 
0:00:49.052 ********** 2026-04-13 00:55:21.812072 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-13 00:55:21.812104 | orchestrator | ...ignoring 2026-04-13 00:55:21.812122 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-13 00:55:21.812141 | orchestrator | ...ignoring 2026-04-13 00:55:21.812157 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-13 00:55:21.812175 | orchestrator | ...ignoring 2026-04-13 00:55:21.812193 | orchestrator | 2026-04-13 00:55:21.812211 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-13 00:55:21.812230 | orchestrator | Monday 13 April 2026 00:55:00 +0000 (0:00:10.788) 0:00:59.841 ********** 2026-04-13 00:55:21.812249 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:55:21.812268 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:55:21.812285 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:55:21.812304 | orchestrator | 2026-04-13 00:55:21.812322 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-13 00:55:21.812341 | orchestrator | Monday 13 April 2026 00:55:01 +0000 (0:00:00.537) 0:01:00.379 ********** 2026-04-13 00:55:21.812359 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.812375 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.812393 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.812410 | orchestrator | 2026-04-13 00:55:21.812429 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-13 00:55:21.812471 | orchestrator | Monday 13 April 2026 00:55:01 +0000 
(0:00:00.368) 0:01:00.747 ********** 2026-04-13 00:55:21.812489 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.812506 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.812525 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.812543 | orchestrator | 2026-04-13 00:55:21.812576 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-13 00:55:21.812596 | orchestrator | Monday 13 April 2026 00:55:01 +0000 (0:00:00.343) 0:01:01.091 ********** 2026-04-13 00:55:21.812615 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.812632 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.812651 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.812669 | orchestrator | 2026-04-13 00:55:21.812689 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-13 00:55:21.812708 | orchestrator | Monday 13 April 2026 00:55:02 +0000 (0:00:00.330) 0:01:01.422 ********** 2026-04-13 00:55:21.812727 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:55:21.812746 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:55:21.812764 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:55:21.812784 | orchestrator | 2026-04-13 00:55:21.812804 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-13 00:55:21.812822 | orchestrator | Monday 13 April 2026 00:55:02 +0000 (0:00:00.364) 0:01:01.787 ********** 2026-04-13 00:55:21.812839 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:21.812856 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.812872 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.812890 | orchestrator | 2026-04-13 00:55:21.812907 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-13 00:55:21.812925 | orchestrator | Monday 13 April 2026 00:55:03 +0000 (0:00:00.554) 
0:01:02.342 ********** 2026-04-13 00:55:21.812943 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.812960 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.812976 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-13 00:55:21.812992 | orchestrator | 2026-04-13 00:55:21.813009 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-13 00:55:21.813027 | orchestrator | Monday 13 April 2026 00:55:03 +0000 (0:00:00.438) 0:01:02.780 ********** 2026-04-13 00:55:21.813061 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_d50nm1vr/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_d50nm1vr/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_d50nm1vr/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in 
self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/mariadb-server not found\")\\n'"} 2026-04-13 00:55:21.813105 | orchestrator | 2026-04-13 00:55:21.813125 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-13 00:55:21.813144 | orchestrator | Monday 13 April 2026 00:55:07 +0000 (0:00:04.313) 0:01:07.094 ********** 2026-04-13 00:55:21.813163 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.813182 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.813201 | orchestrator | 2026-04-13 00:55:21.813220 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-13 00:55:21.813240 | orchestrator | Monday 13 April 2026 00:55:08 +0000 (0:00:00.642) 0:01:07.736 ********** 2026-04-13 00:55:21.813257 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:21.813277 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:21.813296 | orchestrator | 2026-04-13 00:55:21.813327 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-13 00:55:21.813348 | orchestrator | Monday 13 April 2026 00:55:08 +0000 (0:00:00.246) 0:01:07.982 ********** 2026-04-13 00:55:21.813367 | orchestrator | 
changed: [testbed-node-1] 2026-04-13 00:55:21.813386 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:55:21.813406 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-13 00:55:21.813425 | orchestrator | 2026-04-13 00:55:21.813477 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-13 00:55:21.813497 | orchestrator | skipping: no hosts matched 2026-04-13 00:55:21.813514 | orchestrator | 2026-04-13 00:55:21.813531 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-13 00:55:21.813548 | orchestrator | 2026-04-13 00:55:21.813565 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-13 00:55:21.813581 | orchestrator | Monday 13 April 2026 00:55:08 +0000 (0:00:00.234) 0:01:08.217 ********** 2026-04-13 00:55:21.813624 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xqr6nsn8/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_xqr6nsn8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_xqr6nsn8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xqr6nsn8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/mariadb-server not found\")\\n'"} 2026-04-13 00:55:21.813645 | orchestrator | 2026-04-13 00:55:21.813662 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:55:21.813678 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-13 00:55:21.813695 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0 failed=1  skipped=33  rescued=0 ignored=1  2026-04-13 00:55:21.813713 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0 failed=1  skipped=38  rescued=0 ignored=1  2026-04-13 
00:55:21.813731 | orchestrator | testbed-node-2 : ok=16  changed=7  unreachable=0 failed=0 skipped=38  rescued=0 ignored=1  2026-04-13 00:55:21.813748 | orchestrator | 2026-04-13 00:55:21.813766 | orchestrator | 2026-04-13 00:55:21.813776 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:55:21.813786 | orchestrator | Monday 13 April 2026 00:55:19 +0000 (0:00:10.787) 0:01:19.005 ********** 2026-04-13 00:55:21.813806 | orchestrator | =============================================================================== 2026-04-13 00:55:21.813817 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.79s 2026-04-13 00:55:21.813840 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.79s 2026-04-13 00:55:21.813857 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 4.31s 2026-04-13 00:55:21.813872 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.00s 2026-04-13 00:55:21.813888 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.54s 2026-04-13 00:55:21.813905 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.23s 2026-04-13 00:55:21.813922 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.19s 2026-04-13 00:55:21.813939 | orchestrator | Check MariaDB service --------------------------------------------------- 3.05s 2026-04-13 00:55:21.813955 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.89s 2026-04-13 00:55:21.813971 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.60s 2026-04-13 00:55:21.813981 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.56s 2026-04-13 00:55:21.813991 | orchestrator | mariadb 
: Restart slave MariaDB container(s) ---------------------------- 2.40s 2026-04-13 00:55:21.814001 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.29s 2026-04-13 00:55:21.814010 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.10s 2026-04-13 00:55:21.814056 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.49s 2026-04-13 00:55:21.814069 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.11s 2026-04-13 00:55:21.814078 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 1.05s 2026-04-13 00:55:21.814088 | orchestrator | mariadb : Ensuring database backup config directory exists -------------- 0.74s 2026-04-13 00:55:21.814098 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.69s 2026-04-13 00:55:21.814107 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.64s 2026-04-13 00:55:21.814117 | orchestrator | 2026-04-13 00:55:21 | INFO  | Task 019fe496-a835-4f75-ae04-dfb258fc1825 is in state STARTED 2026-04-13 00:55:21.814126 | orchestrator | 2026-04-13 00:55:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:55:24.859515 | orchestrator | 2026-04-13 00:55:24 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED 2026-04-13 00:55:24.860426 | orchestrator | 2026-04-13 00:55:24 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:55:24.861086 | orchestrator | 2026-04-13 00:55:24 | INFO  | Task 019fe496-a835-4f75-ae04-dfb258fc1825 is in state STARTED 2026-04-13 00:55:24.861103 | orchestrator | 2026-04-13 00:55:24 | INFO  | Wait 1 second(s) until the next check
check 2026-04-13 00:55:52.259350 | orchestrator | 2026-04-13 00:55:52 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED 2026-04-13 00:55:52.261023 | orchestrator | 2026-04-13 00:55:52 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:55:52.262844 | orchestrator | 2026-04-13 00:55:52 | INFO  | Task 019fe496-a835-4f75-ae04-dfb258fc1825 is in state STARTED 2026-04-13 00:55:52.262877 | orchestrator | 2026-04-13 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:55:55.303131 | orchestrator | 2026-04-13 00:55:55 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED 2026-04-13 00:55:55.303781 | orchestrator | 2026-04-13 00:55:55 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:55:55.304983 | orchestrator | 2026-04-13 00:55:55 | INFO  | Task 019fe496-a835-4f75-ae04-dfb258fc1825 is in state STARTED 2026-04-13 00:55:55.305036 | orchestrator | 2026-04-13 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:55:58.348062 | orchestrator | 2026-04-13 00:55:58 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED 2026-04-13 00:55:58.349028 | orchestrator | 2026-04-13 00:55:58 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:55:58.350747 | orchestrator | 2026-04-13 00:55:58 | INFO  | Task 019fe496-a835-4f75-ae04-dfb258fc1825 is in state SUCCESS 2026-04-13 00:55:58.353155 | orchestrator | 2026-04-13 00:55:58.353324 | orchestrator | 2026-04-13 00:55:58.353340 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:55:58.353355 | orchestrator | 2026-04-13 00:55:58.353370 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:55:58.353384 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.319) 0:00:00.319 ********** 2026-04-13 00:55:58.353398 | orchestrator | 
ok: [testbed-node-0]
2026-04-13 00:55:58.353414 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.353779 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.353798 | orchestrator | 
2026-04-13 00:55:58.353814 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:55:58.353829 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.269) 0:00:00.588 **********
2026-04-13 00:55:58.353843 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-13 00:55:58.353858 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-13 00:55:58.353873 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-13 00:55:58.353887 | orchestrator | 
2026-04-13 00:55:58.353901 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-13 00:55:58.353918 | orchestrator | 
2026-04-13 00:55:58.353933 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-13 00:55:58.353948 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.356) 0:00:00.945 **********
2026-04-13 00:55:58.353964 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:55:58.353981 | orchestrator | 
2026-04-13 00:55:58.353994 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-13 00:55:58.354007 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.617) 0:00:01.562 **********
2026-04-13 00:55:58.354104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:55:58.354181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:55:58.354210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-13 00:55:58.354237 | orchestrator | 
2026-04-13 00:55:58.354253 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-13 00:55:58.354268 | orchestrator | Monday 13 April 2026 00:55:26 +0000 (0:00:01.833) 0:00:03.396 **********
2026-04-13 00:55:58.354282 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.354296 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.354308 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.354321 | orchestrator | 
2026-04-13 00:55:58.354348 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-13 00:55:58.354362 | orchestrator | Monday 13 April 2026 00:55:26 +0000 (0:00:00.272) 0:00:03.668 **********
2026-04-13 00:55:58.354377 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False}) 
2026-04-13 00:55:58.354392 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'}) 
2026-04-13 00:55:58.354406 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False}) 
2026-04-13 00:55:58.354459 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False}) 
2026-04-13 00:55:58.354477 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False}) 
2026-04-13 00:55:58.354490 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False}) 
2026-04-13 00:55:58.354504 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'trove', 'enabled': False}) 
2026-04-13 00:55:58.354518 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False}) 
2026-04-13 00:55:58.354532 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False}) 
2026-04-13 00:55:58.354546 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'}) 
2026-04-13 00:55:58.354560 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False}) 
2026-04-13 00:55:58.354573 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False}) 
2026-04-13 00:55:58.354588 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False}) 
2026-04-13 00:55:58.354602 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False}) 
2026-04-13 00:55:58.354616 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False}) 
2026-04-13 00:55:58.354630 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False}) 
2026-04-13 00:55:58.354645 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False}) 
2026-04-13 00:55:58.354659 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'}) 
2026-04-13 00:55:58.354686 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False}) 
2026-04-13 00:55:58.354700 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False}) 
2026-04-13 00:55:58.354714 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False}) 
2026-04-13 00:55:58.354729 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False}) 
2026-04-13 00:55:58.354743 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False}) 
2026-04-13 00:55:58.354756 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False}) 
2026-04-13 00:55:58.354770 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-13 00:55:58.354786 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-13 00:55:58.354799 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-13 00:55:58.354813 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-13 00:55:58.354836 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-13 00:55:58.354851 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-13 00:55:58.354865 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-13 00:55:58.354879 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-13 00:55:58.354894 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-13 00:55:58.354909 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-13 00:55:58.354923 | orchestrator | 
2026-04-13 00:55:58.354937 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.354953 | orchestrator | Monday 13 April 2026 00:55:27 +0000 (0:00:00.705) 0:00:04.374 **********
2026-04-13 00:55:58.354967 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.354982 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.355007 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.355022 | orchestrator | 
2026-04-13 00:55:58.355036 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.355050 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:00.140) 0:00:04.846 **********
2026-04-13 00:55:58.355065 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355080 | orchestrator | 
2026-04-13 00:55:58.355095 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.355109 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:00.262) 0:00:04.987 **********
2026-04-13 00:55:58.355124 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355138 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.355152 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.355166 | orchestrator | 
2026-04-13 00:55:58.355181 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.355196 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:00.309) 0:00:05.249 **********
2026-04-13 00:55:58.355223 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.355237 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.355251 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.355266 | orchestrator | 
2026-04-13 00:55:58.355280 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.355294 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:00.309) 0:00:05.558 **********
2026-04-13 00:55:58.355307 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355321 | orchestrator | 
2026-04-13 00:55:58.355335 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.355349 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:00.113) 0:00:05.672 **********
2026-04-13 00:55:58.355365 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355381 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.355395 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.355411 | orchestrator | 
2026-04-13 00:55:58.355453 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.355470 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:00.470) 0:00:06.142 **********
2026-04-13 00:55:58.355484 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.355499 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.355511 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.355519 | orchestrator | 
2026-04-13 00:55:58.355528 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.355537 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:00.338) 0:00:06.481 **********
2026-04-13 00:55:58.355545 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355555 | orchestrator | 
2026-04-13 00:55:58.355569 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.355584 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:00.136) 0:00:06.617 **********
2026-04-13 00:55:58.355598 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355612 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.355626 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.355639 | orchestrator | 
2026-04-13 00:55:58.355654 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.355668 | orchestrator | Monday 13 April 2026 00:55:30 +0000 (0:00:00.308) 0:00:06.925 **********
2026-04-13 00:55:58.355682 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.355697 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.355711 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.355725 | orchestrator | 
2026-04-13 00:55:58.355739 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.355753 | orchestrator | Monday 13 April 2026 00:55:30 +0000 (0:00:00.308) 0:00:07.234 **********
2026-04-13 00:55:58.355766 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355782 | orchestrator | 
2026-04-13 00:55:58.355797 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.355812 | orchestrator | Monday 13 April 2026 00:55:30 +0000 (0:00:00.122) 0:00:07.356 **********
2026-04-13 00:55:58.355826 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.355842 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.355857 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.355872 | orchestrator | 
2026-04-13 00:55:58.355887 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.355902 | orchestrator | Monday 13 April 2026 00:55:31 +0000 (0:00:00.547) 0:00:07.904 **********
2026-04-13 00:55:58.355916 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.355930 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.355945 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.355960 | orchestrator | 
2026-04-13 
00:55:58.355985 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.356001 | orchestrator | Monday 13 April 2026 00:55:31 +0000 (0:00:00.316) 0:00:08.220 **********
2026-04-13 00:55:58.356043 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356058 | orchestrator | 
2026-04-13 00:55:58.356073 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.356087 | orchestrator | Monday 13 April 2026 00:55:31 +0000 (0:00:00.131) 0:00:08.352 **********
2026-04-13 00:55:58.356102 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356117 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.356132 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.356146 | orchestrator | 
2026-04-13 00:55:58.356161 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.356176 | orchestrator | Monday 13 April 2026 00:55:31 +0000 (0:00:00.275) 0:00:08.628 **********
2026-04-13 00:55:58.356189 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.356203 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.356215 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.356227 | orchestrator | 
2026-04-13 00:55:58.356240 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.356254 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:00.342) 0:00:08.970 **********
2026-04-13 00:55:58.356268 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356283 | orchestrator | 
2026-04-13 00:55:58.356297 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.356311 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:00.113) 0:00:09.084 **********
2026-04-13 00:55:58.356339 | orchestrator | skipping: [testbed-node-0] 
2026-04-13 00:55:58.356354 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.356368 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.356382 | orchestrator | 
2026-04-13 00:55:58.356395 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.356409 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:00.496) 0:00:09.580 **********
2026-04-13 00:55:58.356483 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.356502 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.356518 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.356534 | orchestrator | 
2026-04-13 00:55:58.356549 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.356563 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.381) 0:00:09.962 **********
2026-04-13 00:55:58.356577 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356592 | orchestrator | 
2026-04-13 00:55:58.356605 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.356620 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.136) 0:00:10.098 **********
2026-04-13 00:55:58.356635 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356651 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.356665 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.356680 | orchestrator | 
2026-04-13 00:55:58.356694 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.356708 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.308) 0:00:10.407 **********
2026-04-13 00:55:58.356723 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.356738 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.356754 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.356769 | orchestrator | 
2026-04-13 00:55:58.356785 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.356801 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.317) 0:00:10.724 **********
2026-04-13 00:55:58.356815 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356828 | orchestrator | 
2026-04-13 00:55:58.356841 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.356855 | orchestrator | Monday 13 April 2026 00:55:34 +0000 (0:00:00.312) 0:00:11.037 **********
2026-04-13 00:55:58.356868 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.356880 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.356894 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.356924 | orchestrator | 
2026-04-13 00:55:58.356938 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.356950 | orchestrator | Monday 13 April 2026 00:55:34 +0000 (0:00:00.347) 0:00:11.384 **********
2026-04-13 00:55:58.356962 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.356976 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.356989 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.357002 | orchestrator | 
2026-04-13 00:55:58.357015 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.357027 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.477) 0:00:11.862 **********
2026-04-13 00:55:58.357040 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.357053 | orchestrator | 
2026-04-13 00:55:58.357067 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.357081 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.128) 0:00:11.991 **********
2026-04-13 00:55:58.357094 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.357107 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.357120 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.357134 | orchestrator | 
2026-04-13 00:55:58.357146 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-13 00:55:58.357160 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.314) 0:00:12.305 **********
2026-04-13 00:55:58.357174 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:55:58.357186 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:55:58.357199 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:55:58.357212 | orchestrator | 
2026-04-13 00:55:58.357226 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-13 00:55:58.357239 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.474) 0:00:12.779 **********
2026-04-13 00:55:58.357252 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.357262 | orchestrator | 
2026-04-13 00:55:58.357270 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-13 00:55:58.357278 | orchestrator | Monday 13 April 2026 00:55:36 +0000 (0:00:00.109) 0:00:12.888 **********
2026-04-13 00:55:58.357285 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.357302 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.357310 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.357318 | orchestrator | 
2026-04-13 00:55:58.357326 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-13 00:55:58.357333 | orchestrator | Monday 13 April 2026 00:55:36 +0000 (0:00:00.318) 0:00:13.207 **********
2026-04-13 00:55:58.357341 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:55:58.357349 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:55:58.357357 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:55:58.357364 | orchestrator | 
2026-04-13 00:55:58.357372 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-13 00:55:58.357380 | orchestrator | Monday 13 April 2026 00:55:38 +0000 (0:00:01.642) 0:00:14.850 **********
2026-04-13 00:55:58.357388 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-13 00:55:58.357396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-13 00:55:58.357403 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-13 00:55:58.357411 | orchestrator | 
2026-04-13 00:55:58.357419 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-13 00:55:58.357487 | orchestrator | Monday 13 April 2026 00:55:41 +0000 (0:00:03.259) 0:00:18.109 **********
2026-04-13 00:55:58.357496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-13 00:55:58.357518 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-13 00:55:58.357527 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-13 00:55:58.357546 | orchestrator | 
2026-04-13 00:55:58.357553 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-13 00:55:58.357562 | orchestrator | Monday 13 April 2026 00:55:44 +0000 (0:00:03.336) 0:00:21.446 **********
2026-04-13 00:55:58.357569 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-13 00:55:58.357577 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-13 00:55:58.357585 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-13 00:55:58.357593 | orchestrator | 
2026-04-13 00:55:58.357601 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-13 00:55:58.357608 | orchestrator | Monday 13 April 2026 00:55:46 +0000 (0:00:01.651) 0:00:23.097 **********
2026-04-13 00:55:58.357616 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.357624 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.357632 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.357639 | orchestrator | 
2026-04-13 00:55:58.357647 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-13 00:55:58.357655 | orchestrator | Monday 13 April 2026 00:55:46 +0000 (0:00:00.309) 0:00:23.407 **********
2026-04-13 00:55:58.357663 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:55:58.357671 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:55:58.357679 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:55:58.357686 | orchestrator | 
2026-04-13 00:55:58.357694 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-13 00:55:58.357702 | orchestrator | Monday 13 April 2026 00:55:46 +0000 (0:00:00.342) 0:00:23.750 **********
2026-04-13 00:55:58.357710 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:55:58.357718 | orchestrator | 
2026-04-13 00:55:58.357726 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-13 00:55:58.357733 | orchestrator | Monday 13 April 2026 00:55:47 +0000 (0:00:00.792) 0:00:24.542 **********
2026-04-13 00:55:58.357751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:55:58.357779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:55:58.357801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:55:58.357830 | orchestrator | 2026-04-13 00:55:58.357838 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-13 00:55:58.357847 | orchestrator | Monday 13 April 2026 00:55:49 +0000 (0:00:01.479) 0:00:26.022 ********** 2026-04-13 00:55:58.357855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.357865 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:58.357883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 2026-04-13 00:55:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:55:58.357902 | orchestrator | 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.357912 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:58.357926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.357941 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:58.357949 | orchestrator | 2026-04-13 00:55:58.357957 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-13 00:55:58.357965 | orchestrator | Monday 13 April 2026 00:55:49 +0000 (0:00:00.708) 0:00:26.731 ********** 2026-04-13 00:55:58.357981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.357990 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:58.358003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.358057 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:58.358075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.358083 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:58.358090 | orchestrator | 2026-04-13 00:55:58.358096 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-13 00:55:58.358103 | orchestrator | Monday 13 April 2026 00:55:51 +0000 (0:00:01.303) 0:00:28.034 ********** 2026-04-13 00:55:58.358120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:55:58.358139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:55:58.358157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-13 00:55:58.358171 | orchestrator | 2026-04-13 00:55:58.358178 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-13 00:55:58.358184 | orchestrator | Monday 13 April 2026 00:55:52 +0000 (0:00:01.392) 0:00:29.427 ********** 2026-04-13 00:55:58.358191 | orchestrator | changed: [testbed-node-0] => { 2026-04-13 00:55:58.358198 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:55:58.358204 | orchestrator | } 2026-04-13 00:55:58.358211 | orchestrator | changed: [testbed-node-1] => { 2026-04-13 00:55:58.358218 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:55:58.358225 | orchestrator | } 2026-04-13 00:55:58.358232 | orchestrator | changed: [testbed-node-2] => { 2026-04-13 00:55:58.358239 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:55:58.358245 | orchestrator | } 2026-04-13 00:55:58.358253 | orchestrator | 2026-04-13 00:55:58.358259 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:55:58.358266 | orchestrator | Monday 13 April 2026 00:55:52 +0000 (0:00:00.363) 0:00:29.791 ********** 2026-04-13 00:55:58.358277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.358290 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:58.358303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.358310 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:58.358322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-13 00:55:58.358334 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:58.358341 | orchestrator | 2026-04-13 00:55:58.358347 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-13 00:55:58.358358 | orchestrator | Monday 13 April 2026 00:55:54 +0000 (0:00:01.408) 0:00:31.200 ********** 2026-04-13 00:55:58.358365 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:55:58.358372 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:55:58.358379 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:55:58.358385 | orchestrator | 2026-04-13 00:55:58.358392 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-13 00:55:58.358399 | orchestrator | Monday 13 April 2026 00:55:54 +0000 (0:00:00.309) 0:00:31.509 ********** 2026-04-13 00:55:58.358406 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:55:58.358413 | orchestrator | 2026-04-13 00:55:58.358420 | orchestrator | TASK [horizon : Creating Horizon database] 
*************************************
2026-04-13 00:55:58.358441 | orchestrator | Monday 13 April 2026 00:55:55 +0000 (0:00:00.522) 0:00:32.032 **********
2026-04-13 00:55:58.358448 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-13 00:55:58.358455 | orchestrator |
2026-04-13 00:55:58.358461 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:55:58.358469 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0
2026-04-13 00:55:58.358477 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-13 00:55:58.358484 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-13 00:55:58.358491 | orchestrator |
2026-04-13 00:55:58.358498 | orchestrator |
2026-04-13 00:55:58.358505 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:55:58.358517 | orchestrator | Monday 13 April 2026 00:55:56 +0000 (0:00:00.830) 0:00:32.863 **********
2026-04-13 00:55:58.358524 | orchestrator | ===============================================================================
2026-04-13 00:55:58.358531 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.34s
2026-04-13 00:55:58.358537 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.26s
2026-04-13 00:55:58.358544 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.83s
2026-04-13 00:55:58.358550 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.65s
2026-04-13 00:55:58.358557 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.64s
2026-04-13 00:55:58.358563 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.48s
2026-04-13 00:55:58.358570 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.41s
2026-04-13 00:55:58.358577 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.39s
2026-04-13 00:55:58.358584 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.30s
2026-04-13 00:55:58.358591 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.83s
2026-04-13 00:55:58.358598 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2026-04-13 00:55:58.358604 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s
2026-04-13 00:55:58.358611 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s
2026-04-13 00:55:58.358618 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s
2026-04-13 00:55:58.358624 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2026-04-13 00:55:58.358631 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2026-04-13 00:55:58.358638 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s
2026-04-13 00:55:58.358648 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s
2026-04-13 00:55:58.358655 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s
2026-04-13 00:55:58.358662 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s
2026-04-13 00:56:01.396270 | orchestrator | 2026-04-13 00:56:01 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED
2026-04-13 00:56:01.396888 | orchestrator | 2026-04-13 00:56:01 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:56:01.396920 | orchestrator | 2026-04-13 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:04.442857 | orchestrator | 2026-04-13 00:56:04 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED
2026-04-13 00:56:04.443276 | orchestrator | 2026-04-13 00:56:04 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:56:04.443497 | orchestrator | 2026-04-13 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:07.510982 | orchestrator | 2026-04-13 00:56:07 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED
2026-04-13 00:56:07.511964 | orchestrator | 2026-04-13 00:56:07 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:56:07.511996 | orchestrator | 2026-04-13 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:10.547135 | orchestrator | 2026-04-13 00:56:10 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state STARTED
2026-04-13 00:56:10.547683 | orchestrator | 2026-04-13 00:56:10 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:56:10.547706 | orchestrator | 2026-04-13 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:56:13.585964 | orchestrator | 2026-04-13 00:56:13 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED
2026-04-13 00:56:13.587338 | orchestrator | 2026-04-13 00:56:13 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED
2026-04-13 00:56:13.591013 | orchestrator | 2026-04-13 00:56:13 | INFO  | Task b24ffa2d-cee6-4e84-ace7-1972bd00a4da is in state SUCCESS
2026-04-13 00:56:13.592067 | orchestrator |
2026-04-13 00:56:13.592148 | orchestrator |
2026-04-13 00:56:13.592162 | orchestrator | PLAY [Group hosts based on configuration] **************************************
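The run above fails at "Creating Horizon database" with "kolla_toolbox container is missing or not running!" on testbed-node-0, which the PLAY RECAP records as failed=1 for that host. As an illustration only (this snippet is not part of the job output), a minimal Python sketch of how such PLAY RECAP lines can be parsed to flag failing hosts; the hostnames and counters are copied from the recap above:

```python
import re

# Matches Ansible PLAY RECAP host lines, e.g.
# "testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26 ..."
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
    r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(recap_lines):
    """Return hosts whose recap shows failed > 0 or unreachable > 0."""
    bad = []
    for line in recap_lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m.group("failed")) > 0 or int(m.group("unreachable")) > 0):
            bad.append(m.group("host"))
    return bad

recap = [
    "testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0",
    "testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0",
    "testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0",
]
print(failed_hosts(recap))  # → ['testbed-node-0']
```

In this log only testbed-node-0 reaches the database-bootstrap task (it is the delegate host), so only its recap carries the failure even though the play targets all three nodes.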
2026-04-13 00:56:13.592171 | orchestrator |
2026-04-13 00:56:13.592178 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:56:13.592185 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.323) 0:00:00.323 **********
2026-04-13 00:56:13.592193 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:56:13.592249 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:56:13.592258 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:56:13.592265 | orchestrator |
2026-04-13 00:56:13.592272 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:56:13.592279 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.325) 0:00:00.649 **********
2026-04-13 00:56:13.592286 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-13 00:56:13.592293 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-13 00:56:13.592300 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-13 00:56:13.592306 | orchestrator |
2026-04-13 00:56:13.592661 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-13 00:56:13.592678 | orchestrator |
2026-04-13 00:56:13.592685 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-13 00:56:13.592692 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.324) 0:00:00.974 **********
2026-04-13 00:56:13.592699 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:56:13.592707 | orchestrator |
2026-04-13 00:56:13.592714 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-13 00:56:13.592721 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.713) 0:00:01.688 **********
2026-04-13 00:56:13.592745 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.592755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.592813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.592822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-13 00:56:13.592833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.592843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.592853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 
00:56:13.592865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.592889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.592897 | orchestrator | 2026-04-13 00:56:13.592903 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-13 00:56:13.592910 | orchestrator | Monday 13 April 2026 00:55:27 +0000 (0:00:02.451) 0:00:04.139 ********** 2026-04-13 00:56:13.592917 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.592937 | orchestrator | 2026-04-13 00:56:13.592945 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-13 00:56:13.592951 | orchestrator | Monday 13 April 2026 00:55:27 +0000 (0:00:00.126) 0:00:04.266 ********** 2026-04-13 00:56:13.592958 
| orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.592964 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.592971 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.592977 | orchestrator | 2026-04-13 00:56:13.592984 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-13 00:56:13.592991 | orchestrator | Monday 13 April 2026 00:55:27 +0000 (0:00:00.286) 0:00:04.552 ********** 2026-04-13 00:56:13.592998 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 00:56:13.593005 | orchestrator | 2026-04-13 00:56:13.593011 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-13 00:56:13.593017 | orchestrator | Monday 13 April 2026 00:55:28 +0000 (0:00:00.985) 0:00:05.537 ********** 2026-04-13 00:56:13.593023 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:56:13.593030 | orchestrator | 2026-04-13 00:56:13.593037 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-13 00:56:13.593044 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:00.675) 0:00:06.213 ********** 2026-04-13 00:56:13.593056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-13 00:56:13.593111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-04-13 00:56:13.593143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593150 | orchestrator | 2026-04-13 00:56:13.593158 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-13 00:56:13.593164 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:03.313) 0:00:09.526 ********** 2026-04-13 00:56:13.593172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.593179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.593201 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.593208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.593219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-04-13 00:56:13.593233 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.593240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.593254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.593268 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.593274 | orchestrator | 2026-04-13 00:56:13.593281 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-13 00:56:13.593287 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:00.649) 0:00:10.176 ********** 2026-04-13 00:56:13.593298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.593305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.593327 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.593339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.593348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.593369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.593386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593393 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.593405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.593435 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.593442 | orchestrator | 2026-04-13 00:56:13.593448 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-13 00:56:13.593455 | orchestrator | Monday 13 April 2026 00:55:34 +0000 (0:00:00.934) 0:00:11.110 ********** 2026-04-13 00:56:13.593462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593474 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593591 | orchestrator | 2026-04-13 00:56:13.593599 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-13 00:56:13.593607 | orchestrator | Monday 13 April 2026 00:55:37 +0000 (0:00:03.354) 0:00:14.465 ********** 2026-04-13 00:56:13.593618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.593659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.593669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2026-04-13 00:56:13.593682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.593698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2026-04-13 00:56:13.593721 | orchestrator | 2026-04-13 00:56:13.593727 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-13 00:56:13.593832 | orchestrator | Monday 13 April 2026 00:55:45 +0000 (0:00:07.338) 0:00:21.803 ********** 2026-04-13 00:56:13.593841 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:56:13.593849 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:56:13.593855 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:56:13.593861 | orchestrator | 2026-04-13 00:56:13.593868 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-13 00:56:13.593874 | orchestrator | Monday 13 April 2026 00:55:46 +0000 (0:00:01.588) 0:00:23.391 ********** 2026-04-13 00:56:13.593880 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.593886 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.593892 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.593898 | orchestrator | 2026-04-13 00:56:13.593905 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-13 00:56:13.593911 | orchestrator | Monday 13 April 2026 00:55:47 +0000 (0:00:00.789) 0:00:24.181 ********** 2026-04-13 00:56:13.593917 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.593923 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.593929 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.593935 | orchestrator | 2026-04-13 00:56:13.593941 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-13 00:56:13.593948 | orchestrator | Monday 13 April 2026 00:55:47 +0000 (0:00:00.505) 0:00:24.686 ********** 2026-04-13 00:56:13.593954 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.593960 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.593966 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 00:56:13.593974 | orchestrator | 2026-04-13 00:56:13.593980 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-13 00:56:13.593986 | orchestrator | Monday 13 April 2026 00:55:48 +0000 (0:00:00.304) 0:00:24.990 ********** 2026-04-13 00:56:13.594001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.594011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.594094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.594104 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.594111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.594118 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.594130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.594137 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.594144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.594164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.594172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.594178 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.594185 | orchestrator | 2026-04-13 00:56:13.594192 | 
orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-13 00:56:13.594199 | orchestrator | Monday 13 April 2026 00:55:48 +0000 (0:00:00.559) 0:00:25.549 **********
2026-04-13 00:56:13.594206 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:13.594213 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:13.594220 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:13.594227 | orchestrator |
2026-04-13 00:56:13.594234 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-04-13 00:56:13.594241 | orchestrator | Monday 13 April 2026 00:55:49 +0000 (0:00:00.296) 0:00:25.846 **********
2026-04-13 00:56:13.594248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-13 00:56:13.594256 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-13 00:56:13.594263 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-13 00:56:13.594270 | orchestrator |
2026-04-13 00:56:13.594277 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-13 00:56:13.594284 | orchestrator | Monday 13 April 2026 00:55:51 +0000 (0:00:01.968) 0:00:27.814 **********
2026-04-13 00:56:13.594292 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 00:56:13.594299 | orchestrator |
2026-04-13 00:56:13.594306 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-13 00:56:13.594314 | orchestrator | Monday 13 April 2026 00:55:52 +0000 (0:00:01.068) 0:00:28.882 **********
2026-04-13 00:56:13.594321 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:56:13.594328 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:13.594336 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:13.594343 | orchestrator |
2026-04-13 00:56:13.594354 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-13 00:56:13.594361 | orchestrator | Monday 13 April 2026 00:55:52 +0000 (0:00:00.546) 0:00:29.428 **********
2026-04-13 00:56:13.594368 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-13 00:56:13.594375 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-13 00:56:13.594387 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-13 00:56:13.594395 | orchestrator |
2026-04-13 00:56:13.594403 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-13 00:56:13.594410 | orchestrator | Monday 13 April 2026 00:55:54 +0000 (0:00:01.641) 0:00:31.070 **********
2026-04-13 00:56:13.594447 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:56:13.594453 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:56:13.594559 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:56:13.594569 | orchestrator |
2026-04-13 00:56:13.594577 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-13 00:56:13.594584 | orchestrator | Monday 13 April 2026 00:55:54 +0000 (0:00:00.329) 0:00:31.399 **********
2026-04-13 00:56:13.594592 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-13 00:56:13.594601 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-13 00:56:13.594609 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-13 00:56:13.594617 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-13 00:56:13.594624 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-13 00:56:13.594632 | orchestrator | changed: [testbed-node-2] =>
(item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-13 00:56:13.594640 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-13 00:56:13.594649 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-13 00:56:13.594657 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-13 00:56:13.594666 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-13 00:56:13.594673 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-13 00:56:13.594689 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-13 00:56:13.594698 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-13 00:56:13.594706 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-13 00:56:13.594713 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-13 00:56:13.594721 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 00:56:13.594730 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 00:56:13.594737 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-13 00:56:13.594743 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 00:56:13.594750 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 00:56:13.594757 | orchestrator | 
changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-13 00:56:13.594764 | orchestrator | 2026-04-13 00:56:13.594770 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-13 00:56:13.594777 | orchestrator | Monday 13 April 2026 00:56:03 +0000 (0:00:09.251) 0:00:40.650 ********** 2026-04-13 00:56:13.594784 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 00:56:13.594790 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 00:56:13.594797 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-13 00:56:13.594815 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 00:56:13.594822 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 00:56:13.594829 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-13 00:56:13.594836 | orchestrator | 2026-04-13 00:56:13.594843 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-13 00:56:13.594850 | orchestrator | Monday 13 April 2026 00:56:06 +0000 (0:00:02.813) 0:00:43.464 ********** 2026-04-13 00:56:13.594863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.594871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.594885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-13 00:56:13.594893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.594911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.594918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-13 00:56:13.594926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.594937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.594944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-13 00:56:13.594952 | orchestrator | 2026-04-13 00:56:13.594958 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-13 00:56:13.594965 | orchestrator | Monday 13 April 2026 00:56:09 +0000 (0:00:02.394) 0:00:45.858 ********** 2026-04-13 00:56:13.594977 | orchestrator | changed: [testbed-node-0] => { 2026-04-13 00:56:13.594985 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:56:13.594992 | orchestrator | } 2026-04-13 00:56:13.594999 | orchestrator | changed: [testbed-node-1] => { 2026-04-13 00:56:13.595006 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:56:13.595013 | orchestrator | } 2026-04-13 00:56:13.595020 | orchestrator | changed: [testbed-node-2] => { 2026-04-13 00:56:13.595027 | orchestrator |  "msg": "Notifying handlers" 2026-04-13 00:56:13.595034 | orchestrator | } 2026-04-13 00:56:13.595039 | orchestrator | 2026-04-13 00:56:13.595045 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-13 00:56:13.595050 | orchestrator | Monday 13 April 2026 00:56:09 +0000 (0:00:00.338) 0:00:46.197 ********** 2026-04-13 00:56:13.595060 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.595066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.595073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.595080 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:56:13.595094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.595107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.595115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.595122 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:56:13.595134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-13 00:56:13.595142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-13 00:56:13.595154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-13 00:56:13.595166 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:56:13.595173 | orchestrator | 2026-04-13 00:56:13.595180 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-13 00:56:13.595187 | orchestrator | Monday 13 April 2026 00:56:10 +0000 (0:00:01.215) 0:00:47.412 ********** 2026-04-13 00:56:13.595194 | orchestrator | skipping: 
[testbed-node-0]
2026-04-13 00:56:13.595201 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:56:13.595207 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:56:13.595214 | orchestrator |
2026-04-13 00:56:13.595221 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-13 00:56:13.595228 | orchestrator | Monday 13 April 2026 00:56:11 +0000 (0:00:00.309) 0:00:47.722 **********
2026-04-13 00:56:13.595235 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-13 00:56:13.595242 | orchestrator |
2026-04-13 00:56:13.595249 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:56:13.595257 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-13 00:56:13.595266 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-13 00:56:13.595274 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-13 00:56:13.595281 | orchestrator |
2026-04-13 00:56:13.595288 | orchestrator |
2026-04-13 00:56:13.595296 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:56:13.595302 | orchestrator | Monday 13 April 2026 00:56:11 +0000 (0:00:00.850) 0:00:48.572 **********
2026-04-13 00:56:13.595309 | orchestrator | ===============================================================================
2026-04-13 00:56:13.595315 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.25s
2026-04-13 00:56:13.595321 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.34s
2026-04-13 00:56:13.595328 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.35s
2026-04-13 00:56:13.595334 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.31s
2026-04-13 00:56:13.595340 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.81s
2026-04-13 00:56:13.595346 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.45s
2026-04-13 00:56:13.595356 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.39s
2026-04-13 00:56:13.595362 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.97s
2026-04-13 00:56:13.595369 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.64s
2026-04-13 00:56:13.595375 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.59s
2026-04-13 00:56:13.595382 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.22s
2026-04-13 00:56:13.595388 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.07s
2026-04-13 00:56:13.595393 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.99s
2026-04-13 00:56:13.595400 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.93s
2026-04-13 00:56:13.595407 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.85s
2026-04-13 00:56:13.595457 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.79s
2026-04-13 00:56:13.595465 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.71s
2026-04-13 00:56:13.595473 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.68s
2026-04-13 00:56:13.595484 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.65s
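The PLAY RECAP above uses Ansible's fixed `host : key=value …` layout, so a failed run can be detected mechanically when scraping job consoles like this one. A minimal sketch, using the recap values from this log; the parser itself is a hypothetical helper, not part of OSISM or Zuul tooling:

```python
import re

# Matches Ansible PLAY RECAP host lines such as:
#   testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(lines):
    """Return {host: {counter_name: int}} for every line that matches the recap layout."""
    result = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if not m:
            continue
        counters = {k: int(v) for k, v in
                    (pair.split("=") for pair in m.group("counters").split())}
        result[m.group("host")] = counters
    return result

def failed_hosts(recap):
    """Hosts with any failed or unreachable tasks."""
    return sorted(h for h, c in recap.items()
                  if c.get("failed", 0) or c.get("unreachable", 0))

# Recap lines copied from the log above.
recap = parse_recap([
    "testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0",
    "testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0",
    "testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0",
])
print(failed_hosts(recap))  # ['testbed-node-0']
```

On this run the sketch singles out testbed-node-0, the host where "Creating keystone database" failed because the kolla_toolbox container was not running.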
2026-04-13 00:56:13.595490 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.56s 2026-04-13 00:56:13.595496 | orchestrator | 2026-04-13 00:56:13 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:13.595502 | orchestrator | 2026-04-13 00:56:13 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:13.595508 | orchestrator | 2026-04-13 00:56:13 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:13.595515 | orchestrator | 2026-04-13 00:56:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:16.629798 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:16.630744 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:16.631079 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:16.633368 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:16.634062 | orchestrator | 2026-04-13 00:56:16 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:16.634089 | orchestrator | 2026-04-13 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:19.667251 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:19.668905 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:19.670558 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:19.672671 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 
00:56:19.674188 | orchestrator | 2026-04-13 00:56:19 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:19.674259 | orchestrator | 2026-04-13 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:22.722639 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:22.723461 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:22.726248 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:22.727371 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:22.728438 | orchestrator | 2026-04-13 00:56:22 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:22.728483 | orchestrator | 2026-04-13 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:25.787621 | orchestrator | 2026-04-13 00:56:25 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:25.790514 | orchestrator | 2026-04-13 00:56:25 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:25.792127 | orchestrator | 2026-04-13 00:56:25 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:25.794250 | orchestrator | 2026-04-13 00:56:25 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:25.795695 | orchestrator | 2026-04-13 00:56:25 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:25.795763 | orchestrator | 2026-04-13 00:56:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:28.857311 | orchestrator | 2026-04-13 00:56:28 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:28.859929 | orchestrator 
| 2026-04-13 00:56:28 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:28.863180 | orchestrator | 2026-04-13 00:56:28 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:28.865294 | orchestrator | 2026-04-13 00:56:28 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:28.867708 | orchestrator | 2026-04-13 00:56:28 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:28.867757 | orchestrator | 2026-04-13 00:56:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:31.923394 | orchestrator | 2026-04-13 00:56:31 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:31.925747 | orchestrator | 2026-04-13 00:56:31 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:31.928070 | orchestrator | 2026-04-13 00:56:31 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:31.930866 | orchestrator | 2026-04-13 00:56:31 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:31.933468 | orchestrator | 2026-04-13 00:56:31 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:31.933554 | orchestrator | 2026-04-13 00:56:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:34.985627 | orchestrator | 2026-04-13 00:56:34 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:34.988559 | orchestrator | 2026-04-13 00:56:34 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:34.990481 | orchestrator | 2026-04-13 00:56:34 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:34.993116 | orchestrator | 2026-04-13 00:56:34 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:34.995505 | orchestrator | 
2026-04-13 00:56:34 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:34.995563 | orchestrator | 2026-04-13 00:56:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:38.057296 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:38.063113 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:38.065318 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:38.068703 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:38.070785 | orchestrator | 2026-04-13 00:56:38 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:38.070848 | orchestrator | 2026-04-13 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:41.122974 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:41.123911 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:41.124964 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:41.126161 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:41.127089 | orchestrator | 2026-04-13 00:56:41 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:41.127116 | orchestrator | 2026-04-13 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:44.172857 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:44.173370 | orchestrator | 2026-04-13 00:56:44 | INFO  | 
Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:44.175291 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:44.177351 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:44.179193 | orchestrator | 2026-04-13 00:56:44 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:44.179268 | orchestrator | 2026-04-13 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:47.231969 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:47.234982 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:47.236599 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:47.239274 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:47.240216 | orchestrator | 2026-04-13 00:56:47 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:47.240230 | orchestrator | 2026-04-13 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:50.281883 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:50.283601 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state STARTED 2026-04-13 00:56:50.286381 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:50.289344 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:50.290915 | orchestrator | 2026-04-13 00:56:50 | INFO  | Task 
2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:50.290955 | orchestrator | 2026-04-13 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:53.330192 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:53.331066 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task c39a518c-0be6-44a5-b28a-0f65e162fcd2 is in state SUCCESS 2026-04-13 00:56:53.332514 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:53.334171 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:53.335821 | orchestrator | 2026-04-13 00:56:53 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:53.335867 | orchestrator | 2026-04-13 00:56:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:56.395088 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:56.400437 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:56.403707 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:56:56.405235 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:56.406435 | orchestrator | 2026-04-13 00:56:56 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:56.406468 | orchestrator | 2026-04-13 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:56:59.458852 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:56:59.462293 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task 
8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:56:59.465734 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:56:59.467286 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:56:59.469559 | orchestrator | 2026-04-13 00:56:59 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:56:59.470121 | orchestrator | 2026-04-13 00:56:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:02.520155 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:57:02.522921 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:57:02.524531 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:57:02.526454 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:57:02.527827 | orchestrator | 2026-04-13 00:57:02 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:57:02.527838 | orchestrator | 2026-04-13 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:05.569438 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:57:05.571214 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:57:05.573172 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:57:05.574917 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:57:05.576752 | orchestrator | 2026-04-13 00:57:05 | INFO  | Task 
2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:57:05.576819 | orchestrator | 2026-04-13 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:08.631853 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:57:08.634421 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:57:08.637039 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:57:08.639890 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:57:08.642265 | orchestrator | 2026-04-13 00:57:08 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:57:08.643516 | orchestrator | 2026-04-13 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:11.687106 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:57:11.688861 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:57:11.690139 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:57:11.691684 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state STARTED 2026-04-13 00:57:11.693003 | orchestrator | 2026-04-13 00:57:11 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state STARTED 2026-04-13 00:57:11.693117 | orchestrator | 2026-04-13 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:14.730358 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state STARTED 2026-04-13 00:57:14.730516 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task 
8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:57:14.733221 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:57:14.734605 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED 2026-04-13 00:57:14.736154 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task 4bb030d1-45da-4816-8328-6235a2e8497a is in state SUCCESS 2026-04-13 00:57:14.737405 | orchestrator | 2026-04-13 00:57:14 | INFO  | Task 2b870e11-dff2-4b49-846e-86eaa2cfbbaa is in state SUCCESS 2026-04-13 00:57:14.737927 | orchestrator | 2026-04-13 00:57:14.737962 | orchestrator | 2026-04-13 00:57:14.737974 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-13 00:57:14.737986 | orchestrator | 2026-04-13 00:57:14.737997 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-13 00:57:14.738007 | orchestrator | Monday 13 April 2026 00:56:16 +0000 (0:00:00.168) 0:00:00.168 ********** 2026-04-13 00:57:14.738078 | orchestrator | changed: [localhost] 2026-04-13 00:57:14.738094 | orchestrator | 2026-04-13 00:57:14.738104 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-13 00:57:14.738115 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:01.729) 0:00:01.897 ********** 2026-04-13 00:57:14.738125 | orchestrator | changed: [localhost] 2026-04-13 00:57:14.738135 | orchestrator | 2026-04-13 00:57:14.738145 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-13 00:57:14.738155 | orchestrator | Monday 13 April 2026 00:56:46 +0000 (0:00:28.898) 0:00:30.796 ********** 2026-04-13 00:57:14.738165 | orchestrator | changed: [localhost] 2026-04-13 00:57:14.738174 | orchestrator | 2026-04-13 00:57:14.738185 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-04-13 00:57:14.738194 | orchestrator | 2026-04-13 00:57:14.738204 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:57:14.738229 | orchestrator | Monday 13 April 2026 00:56:51 +0000 (0:00:04.826) 0:00:35.622 ********** 2026-04-13 00:57:14.738240 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:14.738251 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:14.738261 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:14.738271 | orchestrator | 2026-04-13 00:57:14.738282 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:57:14.738293 | orchestrator | Monday 13 April 2026 00:56:51 +0000 (0:00:00.308) 0:00:35.931 ********** 2026-04-13 00:57:14.738303 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-04-13 00:57:14.738314 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-04-13 00:57:14.738324 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-04-13 00:57:14.738334 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-04-13 00:57:14.738369 | orchestrator | 2026-04-13 00:57:14.738415 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-04-13 00:57:14.738432 | orchestrator | skipping: no hosts matched 2026-04-13 00:57:14.738449 | orchestrator | 2026-04-13 00:57:14.738465 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:57:14.738482 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.738501 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.738520 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2026-04-13 00:57:14.738532 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.738542 | orchestrator | 2026-04-13 00:57:14.738554 | orchestrator | 2026-04-13 00:57:14.738566 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:57:14.738576 | orchestrator | Monday 13 April 2026 00:56:52 +0000 (0:00:00.484) 0:00:36.416 ********** 2026-04-13 00:57:14.738588 | orchestrator | =============================================================================== 2026-04-13 00:57:14.738599 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 28.90s 2026-04-13 00:57:14.738610 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.83s 2026-04-13 00:57:14.738622 | orchestrator | Ensure the destination directory exists --------------------------------- 1.73s 2026-04-13 00:57:14.738633 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-04-13 00:57:14.738644 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-13 00:57:14.738655 | orchestrator | 2026-04-13 00:57:14.738666 | orchestrator | 2026-04-13 00:57:14.738678 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:57:14.738688 | orchestrator | 2026-04-13 00:57:14.738699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:57:14.738711 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.430) 0:00:00.430 ********** 2026-04-13 00:57:14.738721 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:14.738732 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:14.738743 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:14.738754 | orchestrator | 2026-04-13 00:57:14.738765 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:57:14.738775 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.420) 0:00:00.851 ********** 2026-04-13 00:57:14.738785 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-13 00:57:14.738796 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-13 00:57:14.738805 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-13 00:57:14.738815 | orchestrator | 2026-04-13 00:57:14.738825 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-13 00:57:14.738835 | orchestrator | 2026-04-13 00:57:14.738844 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-13 00:57:14.738854 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.343) 0:00:01.194 ********** 2026-04-13 00:57:14.738864 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:57:14.738873 | orchestrator | 2026-04-13 00:57:14.738883 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-04-13 00:57:14.738893 | orchestrator | Monday 13 April 2026 00:56:19 +0000 (0:00:00.981) 0:00:02.176 ********** 2026-04-13 00:57:14.738918 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left). 2026-04-13 00:57:14.738938 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left). 2026-04-13 00:57:14.738948 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left). 2026-04-13 00:57:14.738957 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left). 
2026-04-13 00:57:14.738967 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left). 2026-04-13 00:57:14.738985 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:57:14.738997 | orchestrator | 2026-04-13 00:57:14.739007 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:57:14.739017 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.739027 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.739037 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.739046 | orchestrator | 2026-04-13 00:57:14.739056 | orchestrator | 2026-04-13 00:57:14.739066 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:57:14.739075 | orchestrator | Monday 13 April 2026 00:57:12 +0000 (0:00:53.568) 0:00:55.744 ********** 2026-04-13 00:57:14.739085 | orchestrator | =============================================================================== 2026-04-13 00:57:14.739094 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 53.57s 2026-04-13 00:57:14.739104 | orchestrator | designate : include_tasks ----------------------------------------------- 0.98s 2026-04-13 00:57:14.739114 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-04-13 
00:57:14.739123 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2026-04-13 00:57:14.739133 | orchestrator | 2026-04-13 00:57:14.739142 | orchestrator | 2026-04-13 00:57:14.739152 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 00:57:14.739161 | orchestrator | 2026-04-13 00:57:14.739171 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 00:57:14.739181 | orchestrator | Monday 13 April 2026 00:56:16 +0000 (0:00:00.444) 0:00:00.444 ********** 2026-04-13 00:57:14.739190 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:57:14.739200 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:57:14.739209 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:57:14.739219 | orchestrator | 2026-04-13 00:57:14.739228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 00:57:14.739238 | orchestrator | Monday 13 April 2026 00:56:16 +0000 (0:00:00.377) 0:00:00.822 ********** 2026-04-13 00:57:14.739248 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-13 00:57:14.739257 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-13 00:57:14.739267 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-13 00:57:14.739277 | orchestrator | 2026-04-13 00:57:14.739286 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-13 00:57:14.739296 | orchestrator | 2026-04-13 00:57:14.739305 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-13 00:57:14.739315 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.625) 0:00:01.448 ********** 2026-04-13 00:57:14.739324 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 
00:57:14.739340 | orchestrator | 2026-04-13 00:57:14.739351 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-04-13 00:57:14.739360 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.951) 0:00:02.399 ********** 2026-04-13 00:57:14.739369 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left). 2026-04-13 00:57:14.739425 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left). 2026-04-13 00:57:14.739436 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left). 2026-04-13 00:57:14.739445 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left). 2026-04-13 00:57:14.739455 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left). 2026-04-13 00:57:14.739473 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 00:57:14.739486 | orchestrator | 2026-04-13 00:57:14.739496 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 00:57:14.739506 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.739516 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 00:57:14.739526 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2026-04-13 00:57:14.739536 | orchestrator | 2026-04-13 00:57:14.739546 | orchestrator | 2026-04-13 00:57:14.739555 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 00:57:14.739565 | orchestrator | Monday 13 April 2026 00:57:12 +0000 (0:00:53.729) 0:00:56.129 ********** 2026-04-13 00:57:14.739580 | orchestrator | =============================================================================== 2026-04-13 00:57:14.739590 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 53.73s 2026-04-13 00:57:14.739600 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.95s 2026-04-13 00:57:14.739609 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-04-13 00:57:14.739619 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-04-13 00:57:14.739629 | orchestrator | 2026-04-13 00:57:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:17.796064 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task d9bd637f-dc1b-41f3-942a-452cea8e1891 is in state SUCCESS 2026-04-13 00:57:17.797937 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 2026-04-13 00:57:17.800934 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED 2026-04-13 00:57:17.803963 | orchestrator | 2026-04-13 00:57:17 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED 2026-04-13 00:57:17.804022 | orchestrator | 2026-04-13 00:57:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 00:57:20.856433 | orchestrator | 2026-04-13 00:57:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 00:57:20.856922 | orchestrator | 2026-04-13 00:57:20 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED 
2026-04-13 00:57:20.859485 | orchestrator | 2026-04-13 00:57:20 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:20.860024 | orchestrator | 2026-04-13 00:57:20 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:20.860054 | orchestrator | 2026-04-13 00:57:20 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:23.906301 | orchestrator | 2026-04-13 00:57:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:23.907225 | orchestrator | 2026-04-13 00:57:23 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:23.909570 | orchestrator | 2026-04-13 00:57:23 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:23.911531 | orchestrator | 2026-04-13 00:57:23 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:23.911901 | orchestrator | 2026-04-13 00:57:23 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:26.963417 | orchestrator | 2026-04-13 00:57:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:26.965273 | orchestrator | 2026-04-13 00:57:26 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:26.968310 | orchestrator | 2026-04-13 00:57:26 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:26.970310 | orchestrator | 2026-04-13 00:57:26 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:26.970333 | orchestrator | 2026-04-13 00:57:26 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:30.017917 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:30.019847 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:30.021484 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:30.023213 | orchestrator | 2026-04-13 00:57:30 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:30.023459 | orchestrator | 2026-04-13 00:57:30 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:33.070955 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:33.072481 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:33.074349 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:33.075548 | orchestrator | 2026-04-13 00:57:33 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:33.075593 | orchestrator | 2026-04-13 00:57:33 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:36.124457 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:36.127709 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:36.130269 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:36.131532 | orchestrator | 2026-04-13 00:57:36 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:36.131575 | orchestrator | 2026-04-13 00:57:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:39.178579 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:39.179857 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:39.182650 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:39.185676 | orchestrator | 2026-04-13 00:57:39 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:39.185701 | orchestrator | 2026-04-13 00:57:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:42.238921 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:42.240063 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:42.241876 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:42.243322 | orchestrator | 2026-04-13 00:57:42 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:42.243423 | orchestrator | 2026-04-13 00:57:42 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:45.284313 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:45.286633 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:45.289813 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:45.292498 | orchestrator | 2026-04-13 00:57:45 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:45.292804 | orchestrator | 2026-04-13 00:57:45 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:48.341090 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:48.342682 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:48.344798 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state STARTED
2026-04-13 00:57:48.348160 | orchestrator | 2026-04-13 00:57:48 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:48.348188 | orchestrator | 2026-04-13 00:57:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:51.398987 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:51.400212 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:51.403072 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task 8325a090-e236-42ac-aea3-6f010ec79b29 is in state SUCCESS
2026-04-13 00:57:51.404429 | orchestrator |
2026-04-13 00:57:51.404500 | orchestrator |
2026-04-13 00:57:51.404521 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:57:51.404540 | orchestrator |
2026-04-13 00:57:51.404556 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:57:51.404573 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.405) 0:00:00.405 **********
2026-04-13 00:57:51.404584 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:51.404595 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:51.404605 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:51.404614 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:51.404624 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:51.404633 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:51.404643 | orchestrator |
2026-04-13 00:57:51.404653 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:57:51.404691 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:01.031) 0:00:01.436 **********
2026-04-13 00:57:51.404702 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-13 00:57:51.404713 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-13 00:57:51.404725 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-13 00:57:51.404741 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-13 00:57:51.404757 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-13 00:57:51.404772 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-13 00:57:51.404789 | orchestrator |
2026-04-13 00:57:51.404807 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-13 00:57:51.404824 | orchestrator |
2026-04-13 00:57:51.404856 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-13 00:57:51.404867 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.781) 0:00:02.217 **********
2026-04-13 00:57:51.404878 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:57:51.404888 | orchestrator |
2026-04-13 00:57:51.404898 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-13 00:57:51.404907 | orchestrator | Monday 13 April 2026 00:56:19 +0000 (0:00:01.049) 0:00:03.267 **********
2026-04-13 00:57:51.404917 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:51.404926 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:51.404936 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:51.404945 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:51.404954 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:51.404967 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:51.404983 | orchestrator |
2026-04-13 00:57:51.404999 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-13 00:57:51.405014 | orchestrator | Monday 13 April 2026 00:56:21 +0000 (0:00:01.389) 0:00:04.657 **********
2026-04-13 00:57:51.405031 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:51.405048 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:51.405065 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:51.405081 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:57:51.405096 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:57:51.405113 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:57:51.405129 | orchestrator |
2026-04-13 00:57:51.405146 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-13 00:57:51.405158 | orchestrator | Monday 13 April 2026 00:56:22 +0000 (0:00:01.167) 0:00:05.825 **********
2026-04-13 00:57:51.405169 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:51.405187 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:51.405466 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:51.405499 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:51.405517 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:51.405535 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:51.405552 | orchestrator |
2026-04-13 00:57:51.405567 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-13 00:57:51.405577 | orchestrator | Monday 13 April 2026 00:56:23 +0000 (0:00:00.610) 0:00:06.435 **********
2026-04-13 00:57:51.405587 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:57:51.405597 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:57:51.405606 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:57:51.405616 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:57:51.405625 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:57:51.405635 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:57:51.405644 | orchestrator |
2026-04-13 00:57:51.405654 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-13 00:57:51.405664 | orchestrator | Monday 13 April 2026 00:56:23 +0000 (0:00:00.769) 0:00:07.204 **********
2026-04-13 00:57:51.405692 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-04-13 00:57:51.405712 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-04-13 00:57:51.405729 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-04-13 00:57:51.405746 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-04-13 00:57:51.405762 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-04-13 00:57:51.405781 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-13 00:57:51.405797 | orchestrator |
2026-04-13 00:57:51.405808 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:57:51.405835 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2026-04-13 00:57:51.405846 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:57:51.405856 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:57:51.405866 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:57:51.405875 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:57:51.405885 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-13 00:57:51.405894 | orchestrator |
2026-04-13 00:57:51.405904 | orchestrator |
2026-04-13 00:57:51.405914 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:57:51.405923 | orchestrator | Monday 13 April 2026 00:57:17 +0000 (0:00:53.235) 0:01:00.440 **********
2026-04-13 00:57:51.405942 | orchestrator | ===============================================================================
2026-04-13 00:57:51.405952 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 53.24s
2026-04-13 00:57:51.405962 | orchestrator | neutron : Get container facts ------------------------------------------- 1.39s
2026-04-13 00:57:51.405973 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.17s
2026-04-13 00:57:51.405984 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.05s
2026-04-13 00:57:51.405995 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.03s
2026-04-13 00:57:51.406005 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s
2026-04-13 00:57:51.406050 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.77s
2026-04-13 00:57:51.406063 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.61s
2026-04-13 00:57:51.406074 | orchestrator |
2026-04-13 00:57:51.406085 | orchestrator |
2026-04-13 00:57:51.406097 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 00:57:51.406114 | orchestrator |
2026-04-13 00:57:51.406133 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 00:57:51.406150 | orchestrator | Monday 13 April 2026 00:56:56 +0000 (0:00:00.336) 0:00:00.336 **********
2026-04-13 00:57:51.406167 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:57:51.406196 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:57:51.406213 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:57:51.406231 | orchestrator |
2026-04-13 00:57:51.406248 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 00:57:51.406264 | orchestrator | Monday 13 April 2026 00:56:56 +0000 (0:00:00.331) 0:00:00.667 **********
2026-04-13 00:57:51.406281 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-13 00:57:51.406297 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-13 00:57:51.406314 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-13 00:57:51.406330 | orchestrator |
2026-04-13 00:57:51.406372 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-13 00:57:51.406389 | orchestrator |
2026-04-13 00:57:51.406405 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-13 00:57:51.406415 | orchestrator | Monday 13 April 2026 00:56:56 +0000 (0:00:00.322) 0:00:00.990 **********
2026-04-13 00:57:51.406425 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:57:51.406435 | orchestrator |
2026-04-13 00:57:51.406444 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-13 00:57:51.406454 | orchestrator | Monday 13 April 2026 00:56:57 +0000 (0:00:00.646) 0:00:01.637 **********
2026-04-13 00:57:51.406463 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-04-13 00:57:51.406473 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-04-13 00:57:51.406482 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-04-13 00:57:51.406492 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-04-13 00:57:51.406502 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-04-13 00:57:51.406512 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-13 00:57:51.406522 | orchestrator |
2026-04-13 00:57:51.406532 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:57:51.406552 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-13 00:57:51.406562 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:57:51.406572 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-13 00:57:51.406582 | orchestrator |
2026-04-13 00:57:51.406591 | orchestrator |
2026-04-13 00:57:51.406601 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:57:51.406610 | orchestrator | Monday 13 April 2026 00:57:50 +0000 (0:00:53.458) 0:00:55.096 **********
2026-04-13 00:57:51.406620 | orchestrator | ===============================================================================
2026-04-13 00:57:51.406629 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.46s
2026-04-13 00:57:51.406638 | orchestrator | placement : include_tasks ----------------------------------------------- 0.65s
2026-04-13 00:57:51.406648 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-13 00:57:51.406657 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s
2026-04-13 00:57:51.406667 | orchestrator | 2026-04-13 00:57:51 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:51.406693 | orchestrator | 2026-04-13 00:57:51 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:54.446322 | orchestrator | 2026-04-13 00:57:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:54.448679 | orchestrator | 2026-04-13 00:57:54 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:54.450497 | orchestrator | 2026-04-13 00:57:54 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:54.450530 | orchestrator | 2026-04-13 00:57:54 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:57:57.487929 | orchestrator | 2026-04-13 00:57:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:57:57.489107 | orchestrator | 2026-04-13 00:57:57 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:57:57.489306 | orchestrator | 2026-04-13 00:57:57 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:57:57.489329 | orchestrator | 2026-04-13 00:57:57 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:00.535964 | orchestrator | 2026-04-13 00:58:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:58:00.536071 | orchestrator | 2026-04-13 00:58:00 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:58:00.536085 | orchestrator | 2026-04-13 00:58:00 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:58:00.536763 | orchestrator | 2026-04-13 00:58:00 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:03.589194 | orchestrator | 2026-04-13 00:58:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:58:03.590204 | orchestrator | 2026-04-13 00:58:03 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:58:03.591566 | orchestrator | 2026-04-13 00:58:03 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:58:03.591613 | orchestrator | 2026-04-13 00:58:03 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:06.639214 | orchestrator | 2026-04-13 00:58:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:58:06.643660 | orchestrator | 2026-04-13 00:58:06 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state STARTED
2026-04-13 00:58:06.645531 | orchestrator | 2026-04-13 00:58:06 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:58:06.646396 | orchestrator | 2026-04-13 00:58:06 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:09.690185 | orchestrator | 2026-04-13 00:58:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:58:09.697133 | orchestrator | 2026-04-13 00:58:09 | INFO  | Task 8fc6444f-2561-41fc-bf1b-28147ad598ae is in state SUCCESS
2026-04-13 00:58:09.699811 | orchestrator |
2026-04-13 00:58:09.699869 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-13 00:58:09.699882 | orchestrator | 2.16.14
2026-04-13 00:58:09.699894 | orchestrator |
2026-04-13 00:58:09.699905 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-13 00:58:09.699915 | orchestrator |
2026-04-13 00:58:09.699925 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-13 00:58:09.699935 | orchestrator | Monday 13 April 2026 00:46:23 +0000 (0:00:00.569) 0:00:00.569 **********
2026-04-13 00:58:09.699946 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.699983 | orchestrator |
2026-04-13 00:58:09.699993 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-13 00:58:09.700003 | orchestrator | Monday 13 April 2026 00:46:24 +0000 (0:00:01.005) 0:00:01.575 **********
2026-04-13 00:58:09.700013 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.700023 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.700032 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.700041 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.700051 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.700060 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.700070 | orchestrator |
2026-04-13 00:58:09.700079 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-13 00:58:09.700089 | orchestrator | Monday 13 April 2026 00:46:27 +0000 (0:00:02.302) 0:00:03.877 **********
2026-04-13 00:58:09.700098 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.700107 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.700117 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.700126 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.700136 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.700145 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.700155 | orchestrator |
2026-04-13 00:58:09.700165 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-13 00:58:09.700175 | orchestrator | Monday 13 April 2026 00:46:27 +0000 (0:00:00.726) 0:00:04.604 **********
2026-04-13 00:58:09.700184 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.700194 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.700203 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.700227 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.700237 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.700246 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.700256 | orchestrator |
2026-04-13 00:58:09.700265 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-13 00:58:09.700275 | orchestrator | Monday 13 April 2026 00:46:28 +0000 (0:00:00.972) 0:00:05.576 **********
2026-04-13 00:58:09.700284 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.700293 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.700303 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.700312 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.700321 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.700331 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.700414 | orchestrator |
2026-04-13 00:58:09.700427 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-13 00:58:09.700438 | orchestrator | Monday 13 April 2026 00:46:29 +0000 (0:00:00.813) 0:00:06.390 **********
2026-04-13 00:58:09.700449 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.700460 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.700470 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.700480 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.700489 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.700498 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.700508 | orchestrator |
2026-04-13 00:58:09.700518 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-13 00:58:09.700527 | orchestrator | Monday 13 April 2026 00:46:30 +0000 (0:00:00.813) 0:00:07.203 **********
2026-04-13 00:58:09.700537 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.700546 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.700555 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.700568 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.700585 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.701118 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.701132 | orchestrator |
2026-04-13 00:58:09.701142 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-13 00:58:09.701152 | orchestrator | Monday 13 April 2026 00:46:31 +0000 (0:00:00.930) 0:00:08.134 **********
2026-04-13 00:58:09.701162 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.701172 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.701195 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.701204 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.701214 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.701223 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.701233 | orchestrator |
2026-04-13 00:58:09.701243 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-13 00:58:09.701252 | orchestrator | Monday 13 April 2026 00:46:32 +0000 (0:00:00.898) 0:00:09.033 **********
2026-04-13 00:58:09.701262 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.701271 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.701281 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.701290 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.701299 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.701309 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.701318 | orchestrator |
2026-04-13 00:58:09.701328 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-13 00:58:09.701368 | orchestrator | Monday 13 April 2026 00:46:33 +0000 (0:00:01.090) 0:00:10.123 **********
2026-04-13 00:58:09.701392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-13 00:58:09.701409 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-13 00:58:09.701425 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-13 00:58:09.701441 | orchestrator |
2026-04-13 00:58:09.701456 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-13 00:58:09.701470 | orchestrator | Monday 13 April 2026 00:46:34 +0000 (0:00:00.705) 0:00:10.829 **********
2026-04-13 00:58:09.701486 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.701502 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.701518 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.701552 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.701569 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.701586 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.701601 | orchestrator |
2026-04-13 00:58:09.701618 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-13 00:58:09.701635 | orchestrator | Monday 13 April 2026 00:46:35 +0000 (0:00:00.982) 0:00:11.812 **********
2026-04-13 00:58:09.701652 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-13 00:58:09.701665 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-13 00:58:09.701675 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-13 00:58:09.701685 | orchestrator |
2026-04-13 00:58:09.701696 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-13 00:58:09.701707 | orchestrator | Monday 13 April 2026 00:46:37 +0000 (0:00:02.600) 0:00:14.412 **********
2026-04-13 00:58:09.701718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-13 00:58:09.701730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-13 00:58:09.701741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-13 00:58:09.701752 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.701763 | orchestrator |
2026-04-13 00:58:09.701774 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-13 00:58:09.701785 | orchestrator | Monday 13 April 2026 00:46:38 +0000 (0:00:01.197) 0:00:15.610 **********
2026-04-13 00:58:09.701799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.701823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.701845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.701857 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.701868 | orchestrator |
2026-04-13 00:58:09.701880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-13 00:58:09.701891 | orchestrator | Monday 13 April 2026 00:46:39 +0000 (0:00:00.976) 0:00:16.586 **********
2026-04-13 00:58:09.701904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.701919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.701931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.701942 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.701953 | orchestrator |
2026-04-13 00:58:09.701965 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-13 00:58:09.701979 | orchestrator | Monday 13 April 2026 00:46:40 +0000 (0:00:00.329) 0:00:16.915 **********
2026-04-13 00:58:09.702065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-13 00:46:35.884809', 'end': '2026-04-13 00:46:36.004020', 'delta': '0:00:00.119211', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.702094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-13 00:46:36.689710', 'end': '2026-04-13 00:46:36.798425', 'delta': '0:00:00.108715', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.702121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-13 00:46:37.410504', 'end': '2026-04-13 00:46:37.495888', 'delta': '0:00:00.085384', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-13 00:58:09.702151 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.702166 | orchestrator |
2026-04-13 00:58:09.702178 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-13 00:58:09.702195 | orchestrator | Monday 13 April 2026 00:46:40 +0000 (0:00:00.501) 0:00:17.417 **********
2026-04-13 00:58:09.702210 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.702223 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.702241 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.702251 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.702261 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.702270 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.702280 | orchestrator |
2026-04-13 00:58:09.702289 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-13 00:58:09.702299 | orchestrator | Monday 13 April 2026 00:46:42 +0000 (0:00:01.331) 0:00:18.749 **********
2026-04-13 00:58:09.702309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-13 00:58:09.702323 | orchestrator |
2026-04-13 00:58:09.704773 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-13 00:58:09.704798 | orchestrator | Monday 13 April 2026 00:46:43 +0000 (0:00:01.341) 0:00:20.090 **********
2026-04-13 00:58:09.704808 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.704818 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.704828 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.704837 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.704847 | orchestrator | skipping: [testbed-node-2] 2026-04-13
00:58:09.704856 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.704866 | orchestrator | 2026-04-13 00:58:09.704876 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-13 00:58:09.704885 | orchestrator | Monday 13 April 2026 00:46:45 +0000 (0:00:01.880) 0:00:21.970 ********** 2026-04-13 00:58:09.704895 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.704904 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.704914 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.704923 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.704932 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.704942 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.704951 | orchestrator | 2026-04-13 00:58:09.704961 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-13 00:58:09.704970 | orchestrator | Monday 13 April 2026 00:46:47 +0000 (0:00:01.905) 0:00:23.876 ********** 2026-04-13 00:58:09.704980 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.704989 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.704999 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.705008 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.705018 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.705027 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.705037 | orchestrator | 2026-04-13 00:58:09.705046 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-13 00:58:09.705056 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:01.170) 0:00:25.047 ********** 2026-04-13 00:58:09.705065 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.705075 | orchestrator | 2026-04-13 00:58:09.705084 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-04-13 00:58:09.705094 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:00.124) 0:00:25.172 ********** 2026-04-13 00:58:09.705103 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.705138 | orchestrator | 2026-04-13 00:58:09.705148 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-13 00:58:09.705158 | orchestrator | Monday 13 April 2026 00:46:48 +0000 (0:00:00.168) 0:00:25.340 ********** 2026-04-13 00:58:09.705167 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.705177 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.705186 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.705273 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.705288 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.705297 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.705307 | orchestrator | 2026-04-13 00:58:09.705316 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-13 00:58:09.705326 | orchestrator | Monday 13 April 2026 00:46:49 +0000 (0:00:00.890) 0:00:26.231 ********** 2026-04-13 00:58:09.705352 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.705363 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.705372 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.705382 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.705391 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.705401 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.705411 | orchestrator | 2026-04-13 00:58:09.705420 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-13 00:58:09.705430 | orchestrator | Monday 13 April 2026 00:46:50 +0000 (0:00:01.116) 0:00:27.348 ********** 2026-04-13 00:58:09.705439 | orchestrator | skipping: 
[testbed-node-3] 2026-04-13 00:58:09.705449 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.705458 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.705468 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.705477 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.705486 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.705496 | orchestrator | 2026-04-13 00:58:09.705506 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-13 00:58:09.705515 | orchestrator | Monday 13 April 2026 00:46:51 +0000 (0:00:00.825) 0:00:28.174 ********** 2026-04-13 00:58:09.705525 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.705534 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.705544 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.705553 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.705562 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.705572 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.705582 | orchestrator | 2026-04-13 00:58:09.705591 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-13 00:58:09.705601 | orchestrator | Monday 13 April 2026 00:46:52 +0000 (0:00:01.077) 0:00:29.251 ********** 2026-04-13 00:58:09.705610 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.705620 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.705637 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.705647 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.705656 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.705687 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.705697 | orchestrator | 2026-04-13 00:58:09.705707 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-13 00:58:09.705717 | 
orchestrator | Monday 13 April 2026 00:46:53 +0000 (0:00:01.054) 0:00:30.306 **********
2026-04-13 00:58:09.705726 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.705735 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.705745 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.705754 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.705764 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.705773 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.705783 | orchestrator |
2026-04-13 00:58:09.705792 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-13 00:58:09.705811 | orchestrator | Monday 13 April 2026 00:46:54 +0000 (0:00:01.217) 0:00:31.524 **********
2026-04-13 00:58:09.705820 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.705830 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.705912 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.705930 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.705946 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.705961 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.705976 | orchestrator |
2026-04-13 00:58:09.705992 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-13 00:58:09.706008 | orchestrator | Monday 13 April 2026 00:46:55 +0000 (0:00:01.023) 0:00:32.547 **********
2026-04-13 00:58:09.706086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a', 'dm-uuid-LVM-cSl6EFC0vACy8fJ7BlSqjPds1pJgcxwLfXOzNyamFGRTM2VeT5WrO4p2XomDG9q7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--100799fe--f0b8--5d68--80c9--d39d0aace7f9-osd--block--100799fe--f0b8--5d68--80c9--d39d0aace7f9', 'dm-uuid-LVM-IqJ6f8a9dcmLdR12gJUOXnHw7clvOZWm9FD367r6iAkFJPcS6r2dVm1z76pgeY48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-13 00:58:09.706435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part1', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part14', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part15', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part16', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.706686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1IV4PH-Qc9i-ENDW-Z9pI-tJih-3vlb-22if96', 'scsi-0QEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089', 'scsi-SQEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.706728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000', 'dm-uuid-LVM-1txSuAOOptD8I4h4eKjXc96vtE7f6jWbC9BOAp4vhlWbCNsWn0IEIKhOWruHyV8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--100799fe--f0b8--5d68--80c9--d39d0aace7f9-osd--block--100799fe--f0b8--5d68--80c9--d39d0aace7f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n4mNIH-HvKZ-CQXZ-jvFD-Dgf1-ia3W-N6c03E', 'scsi-0QEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1', 'scsi-SQEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.706763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196', 'dm-uuid-LVM-SQiielPvNiJjT4l9ezQgzn3ldkRcoUJzGPQdxfMc0JVwjrask2CEmaj4gQR7EVtA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb', 'scsi-SQEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.706859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.706889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--586ba51f--dba7--5dcd--8710--1804179cab86-osd--block--586ba51f--dba7--5dcd--8710--1804179cab86', 'dm-uuid-LVM-8caEtY6MBEn2RdyAHnKISh0sKzPpSLh1PICaFdkYsf0qkm2dV0jOvEPAX71wUGht'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--971aa970--5a40--5da7--9620--8f2c789358d2-osd--block--971aa970--5a40--5da7--9620--8f2c789358d2', 'dm-uuid-LVM-aFwIWAYFs8WYeXQaKcSMhdbdGZ2QSYf8M9Wn37p1lnEvd08xMlzmh3CEsSCBXnLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.706988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707043 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.707064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707265 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KOrNQr-6C6l-bOv2-PHf1-dI8W-RMyy-ZtFrf4', 'scsi-0QEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5', 'scsi-SQEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rVbr9f-Be4n-dgvH-W7EQ-qBne-SxNz-hN6c4z', 'scsi-0QEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687', 'scsi-SQEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b', 'scsi-SQEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-13 00:58:09.707501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part1', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part14', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part15', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part16', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--586ba51f--dba7--5dcd--8710--1804179cab86-osd--block--586ba51f--dba7--5dcd--8710--1804179cab86'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fnwwdP-vgp1-BVac-ze3F-QwX8-FUj4-mA0ico', 'scsi-0QEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7', 'scsi-SQEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--971aa970--5a40--5da7--9620--8f2c789358d2-osd--block--971aa970--5a40--5da7--9620--8f2c789358d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wjwTDS-mZPP-mWMN-37Tp-AE7E-Rqg4-v0jeB5', 'scsi-0QEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9', 'scsi-SQEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69', 'scsi-SQEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707636 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.707698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-13 00:58:09.707906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.707975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part1', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part14', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part15', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part16', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.707993 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.708000 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.708023 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.708031 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.708037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-13 00:58:09.708157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 00:58:09.708165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.708221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 00:58:09.708232 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.708239 | orchestrator | 2026-04-13 00:58:09.708246 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-13 00:58:09.708254 | orchestrator | Monday 13 April 2026 00:46:57 +0000 (0:00:01.565) 0:00:34.113 ********** 2026-04-13 00:58:09.708263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a', 
'dm-uuid-LVM-cSl6EFC0vACy8fJ7BlSqjPds1pJgcxwLfXOzNyamFGRTM2VeT5WrO4p2XomDG9q7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
[… repeated near-identical 'skipping' loop results elided. Summary: at 2026-04-13 00:58:09 the task skipped every block device on testbed-node-3, testbed-node-4, and testbed-node-5 (dm-0, dm-1, loop0–loop7, sda with partitions sda1/sda14/sda15/sda16, sdb, sdc, sdd, sr0) because the condition 'osd_auto_discovery | default(False) | bool' evaluated to false, and skipped the loop devices on testbed-node-0 because 'inventory_hostname in groups.get(osd_group_name, [])' evaluated to false. Each skipped item carried the full ansible_devices fact for that device (QEMU HARDDISK / QEMU DVD-ROM, 20.00 GB Ceph OSD LVs on sdb/sdc, 80.00 GB root disk sda). …]
2026-04-13 00:58:09.709221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709228 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KOrNQr-6C6l-bOv2-PHf1-dI8W-RMyy-ZtFrf4', 'scsi-0QEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5', 'scsi-SQEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709241 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.709292 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part1', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part14', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part15', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part16', 'scsi-SQEMU_QEMU_HARDDISK_dea7381b-ad1e-42b1-98ca-267bdb7db168-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rVbr9f-Be4n-dgvH-W7EQ-qBne-SxNz-hN6c4z', 'scsi-0QEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687', 'scsi-SQEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709319 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709327 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b', 'scsi-SQEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709418 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709430 | orchestrator | 
skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709465 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709477 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709488 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.709501 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709509 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.709567 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709577 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709590 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part1', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part14', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part15', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part16', 'scsi-SQEMU_QEMU_HARDDISK_97e6821d-280a-4d97-99b4-3ef3a3e75d06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-13 00:58:09.709652 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709662 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709669 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709676 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.709700 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709713 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709727 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709777 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709787 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709800 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdc8ba01-4f9f-45f9-bedc-50cd21a5940b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709812 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 00:58:09.709820 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.709826 | orchestrator | 2026-04-13 00:58:09.709875 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-13 00:58:09.709885 | orchestrator | Monday 13 April 2026 00:46:58 +0000 (0:00:00.999) 0:00:35.112 ********** 2026-04-13 00:58:09.709892 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.709899 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.709906 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.709912 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.709919 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.709926 | orchestrator | ok: [testbed-node-2] 2026-04-13 
00:58:09.709932 | orchestrator | 2026-04-13 00:58:09.709948 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-13 00:58:09.709955 | orchestrator | Monday 13 April 2026 00:46:59 +0000 (0:00:01.131) 0:00:36.243 ********** 2026-04-13 00:58:09.709962 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.709968 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.709975 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.709981 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.709988 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.709995 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.710006 | orchestrator | 2026-04-13 00:58:09.710013 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 00:58:09.710046 | orchestrator | Monday 13 April 2026 00:47:00 +0000 (0:00:00.669) 0:00:36.913 ********** 2026-04-13 00:58:09.710053 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.710060 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.710067 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.710073 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.710080 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.710086 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.710093 | orchestrator | 2026-04-13 00:58:09.710099 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 00:58:09.710106 | orchestrator | Monday 13 April 2026 00:47:01 +0000 (0:00:01.655) 0:00:38.569 ********** 2026-04-13 00:58:09.710121 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.710127 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.710134 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.710140 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.710147 | orchestrator | 
skipping: [testbed-node-1]
2026-04-13 00:58:09.710153 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.710160 | orchestrator |
2026-04-13 00:58:09.710171 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-13 00:58:09.710177 | orchestrator | Monday 13 April 2026  00:47:02 +0000 (0:00:00.901)       0:00:39.471 **********
2026-04-13 00:58:09.710184 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.710191 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.710197 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.710204 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.710211 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.710217 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.710224 | orchestrator |
2026-04-13 00:58:09.710230 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-13 00:58:09.710237 | orchestrator | Monday 13 April 2026  00:47:04 +0000 (0:00:01.727)       0:00:41.198 **********
2026-04-13 00:58:09.710244 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.710250 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.710257 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.710263 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.710270 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.710276 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.710283 | orchestrator |
2026-04-13 00:58:09.710289 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-13 00:58:09.710296 | orchestrator | Monday 13 April 2026  00:47:06 +0000 (0:00:02.301)       0:00:43.500 **********
2026-04-13 00:58:09.710303 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-13 00:58:09.710309 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-13 00:58:09.710316 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-13 00:58:09.710323 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-13 00:58:09.710329 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-13 00:58:09.710379 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-13 00:58:09.710386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:58:09.710393 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-13 00:58:09.710399 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-13 00:58:09.710406 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-13 00:58:09.710413 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:58:09.710419 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:58:09.710426 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-13 00:58:09.710432 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-13 00:58:09.710445 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-13 00:58:09.710452 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-13 00:58:09.710458 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-13 00:58:09.710465 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-13 00:58:09.710471 | orchestrator |
2026-04-13 00:58:09.710478 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-13 00:58:09.710485 | orchestrator | Monday 13 April 2026  00:47:14 +0000 (0:00:07.849)       0:00:51.350 **********
2026-04-13 00:58:09.710491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-13 00:58:09.710498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-13 00:58:09.710505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-13 00:58:09.710513 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.710520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-13 00:58:09.710528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-13 00:58:09.710536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-13 00:58:09.710544 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-13 00:58:09.710610 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-13 00:58:09.710622 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-13 00:58:09.710630 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.710638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:58:09.710646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:58:09.710655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:58:09.710663 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.710671 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-13 00:58:09.710679 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-13 00:58:09.710687 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.710695 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-13 00:58:09.710703 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.710710 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-13 00:58:09.710718 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-13 00:58:09.710725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-13 00:58:09.710733 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.710740 | orchestrator |
2026-04-13 00:58:09.710747 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-13 00:58:09.710755 | orchestrator | Monday 13 April 2026  00:47:16 +0000 (0:00:02.098)       0:00:53.448 **********
2026-04-13 00:58:09.710763 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.710770 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.710777 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.710785 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.710793 | orchestrator |
2026-04-13 00:58:09.710800 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-13 00:58:09.710808 | orchestrator | Monday 13 April 2026  00:47:19 +0000 (0:00:02.301)       0:00:55.750 **********
2026-04-13 00:58:09.710819 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.710826 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.710834 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.710841 | orchestrator |
2026-04-13 00:58:09.710848 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-13 00:58:09.710855 | orchestrator | Monday 13 April 2026  00:47:19 +0000 (0:00:00.616)       0:00:56.366 **********
2026-04-13 00:58:09.710863 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.710875 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.710883 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.710890 | orchestrator |
2026-04-13 00:58:09.710896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-13 00:58:09.710903 | orchestrator | Monday 13 April 2026  00:47:20 +0000 (0:00:00.584)       0:00:56.950 **********
2026-04-13 00:58:09.710909 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.710915 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.710922 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.710928 | orchestrator |
2026-04-13 00:58:09.710934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-13 00:58:09.710941 | orchestrator | Monday 13 April 2026  00:47:21 +0000 (0:00:00.919)       0:00:57.870 **********
2026-04-13 00:58:09.710947 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.710954 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.710960 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.710966 | orchestrator |
2026-04-13 00:58:09.710982 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-13 00:58:09.710989 | orchestrator | Monday 13 April 2026  00:47:22 +0000 (0:00:01.653)       0:00:59.524 **********
2026-04-13 00:58:09.710995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.711001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.711007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.711013 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711019 | orchestrator |
2026-04-13 00:58:09.711026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-13 00:58:09.711032 | orchestrator | Monday 13 April 2026  00:47:23 +0000 (0:00:00.639)       0:01:00.163 **********
2026-04-13 00:58:09.711038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.711044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.711050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.711056 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711062 | orchestrator |
2026-04-13 00:58:09.711069 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-13 00:58:09.711075 | orchestrator | Monday 13 April 2026  00:47:24 +0000 (0:00:01.223)       0:01:01.387 **********
2026-04-13 00:58:09.711081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.711087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.711093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.711099 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711105 | orchestrator |
2026-04-13 00:58:09.711112 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-13 00:58:09.711118 | orchestrator | Monday 13 April 2026  00:47:25 +0000 (0:00:00.569)       0:01:01.956 **********
2026-04-13 00:58:09.711124 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.711130 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.711136 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.711142 | orchestrator |
2026-04-13 00:58:09.711148 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-13 00:58:09.711154 | orchestrator | Monday 13 April 2026  00:47:25 +0000 (0:00:00.452)       0:01:02.408 **********
2026-04-13 00:58:09.711161 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-13 00:58:09.711167 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-13 00:58:09.711193 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-13 00:58:09.711200 | orchestrator |
2026-04-13 00:58:09.711206 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-13 00:58:09.711213 | orchestrator | Monday 13 April 2026  00:47:26 +0000 (0:00:00.987)       0:01:03.396 **********
2026-04-13 00:58:09.711219 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-13 00:58:09.711225 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-13 00:58:09.711236 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-13 00:58:09.711242 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.711248 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-13 00:58:09.711254 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-13 00:58:09.711260 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-13 00:58:09.711266 | orchestrator |
2026-04-13 00:58:09.711272 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-13 00:58:09.711279 | orchestrator | Monday 13 April 2026  00:47:28 +0000 (0:00:01.340)       0:01:04.737 **********
2026-04-13 00:58:09.711285 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-13 00:58:09.711291 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-13 00:58:09.711297 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-13 00:58:09.711303 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.711309 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-13 00:58:09.711315 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-13 00:58:09.711324 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-13 00:58:09.711331 | orchestrator |
2026-04-13 00:58:09.711350 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-13 00:58:09.711356 | orchestrator | Monday 13 April 2026  00:47:30 +0000 (0:00:02.858)       0:01:07.595 **********
2026-04-13 00:58:09.711363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.711371 | orchestrator |
2026-04-13 00:58:09.711377 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-13 00:58:09.711383 | orchestrator | Monday 13 April 2026  00:47:32 +0000 (0:00:01.528)       0:01:09.123 **********
2026-04-13 00:58:09.711389 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.711396 | orchestrator |
2026-04-13 00:58:09.711402 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-13 00:58:09.711408 | orchestrator | Monday 13 April 2026  00:47:33 +0000 (0:00:01.130)       0:01:10.253 **********
2026-04-13 00:58:09.711414 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711420 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.711427 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.711433 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.711439 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.711445 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.711451 | orchestrator |
2026-04-13 00:58:09.711457 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-13 00:58:09.711463 | orchestrator | Monday 13 April 2026  00:47:34 +0000 (0:00:01.293)       0:01:11.547 **********
2026-04-13 00:58:09.711470 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.711476 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.711482 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.711488 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.711494 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.711500 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.711506 | orchestrator |
2026-04-13 00:58:09.711513 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-13 00:58:09.711519 | orchestrator | Monday 13 April 2026  00:47:35 +0000 (0:00:01.028)       0:01:12.576 **********
2026-04-13 00:58:09.711540 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.711546 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.711552 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.711559 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.711565 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.711571 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.711577 | orchestrator |
2026-04-13 00:58:09.711583 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-13 00:58:09.711589 | orchestrator | Monday 13 April 2026  00:47:36 +0000 (0:00:00.659)       0:01:13.235 **********
2026-04-13 00:58:09.711596 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.711602 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.711608 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.711614 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.711620 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.711626 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.711632 | orchestrator |
2026-04-13 00:58:09.711638 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-13 00:58:09.711644 | orchestrator | Monday 13 April 2026  00:47:37 +0000 (0:00:00.945)       0:01:14.180 **********
2026-04-13 00:58:09.711651 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711657 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.711663 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.711669 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.711675 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.711700 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.711707 | orchestrator |
2026-04-13 00:58:09.711714 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-13 00:58:09.711720 | orchestrator | Monday 13 April 2026  00:47:38 +0000 (0:00:01.066)       0:01:15.247 **********
2026-04-13 00:58:09.711726 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711732 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.711738 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.711744 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.711751 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.711757 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.711763 | orchestrator |
2026-04-13 00:58:09.711769 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-13 00:58:09.711775 | orchestrator | Monday 13 April 2026  00:47:39 +0000 (0:00:00.940)       0:01:16.187 **********
2026-04-13 00:58:09.711781 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711787 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.711793 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.711799 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.711805 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.711811 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.711818 | orchestrator |
2026-04-13 00:58:09.711824 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-13 00:58:09.711830 | orchestrator | Monday 13 April 2026  00:47:40 +0000 (0:00:00.581)       0:01:16.768 **********
2026-04-13 00:58:09.711836 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.711842 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.711848 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.711855 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.711861 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.711867 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.711873 | orchestrator |
2026-04-13 00:58:09.711879 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-13 00:58:09.711885 | orchestrator | Monday 13 April 2026  00:47:41 +0000 (0:00:01.445)       0:01:18.214 **********
2026-04-13 00:58:09.711891 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.711897 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.711903 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.711909 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.711920 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.711930 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.711936 | orchestrator |
2026-04-13 00:58:09.711942 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-13 00:58:09.711948 | orchestrator | Monday 13 April 2026  00:47:42 +0000 (0:00:01.093)       0:01:19.307 **********
2026-04-13 00:58:09.711954 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.711961 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.711967 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.711973 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.711979 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.711985 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.711991 | orchestrator |
2026-04-13 00:58:09.711997 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-13 00:58:09.712003 | orchestrator | Monday 13 April 2026  00:47:43 +0000 (0:00:00.835)       0:01:20.142 **********
2026-04-13 00:58:09.712009 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.712015 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.712021 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.712028 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.712034 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.712040 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.712046 | orchestrator |
2026-04-13 00:58:09.712052 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-13 00:58:09.712058 | orchestrator | Monday 13 April 2026  00:47:44 +0000 (0:00:00.696)       0:01:20.838 **********
2026-04-13 00:58:09.712064 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.712071 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.712076 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.712083 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712089 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712095 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712101 | orchestrator |
2026-04-13 00:58:09.712107 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-13 00:58:09.712113 | orchestrator | Monday 13 April 2026  00:47:45 +0000 (0:00:01.087)       0:01:21.926 **********
2026-04-13 00:58:09.712119 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.712128 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.712137 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.712147 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712158 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712168 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712177 | orchestrator |
2026-04-13 00:58:09.712186 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-13 00:58:09.712196 | orchestrator | Monday 13 April 2026  00:47:45 +0000 (0:00:00.661)       0:01:22.587 **********
2026-04-13 00:58:09.712205 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.712215 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.712223 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.712234 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712243 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712253 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712263 | orchestrator |
2026-04-13 00:58:09.712272 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-13 00:58:09.712283 | orchestrator | Monday 13 April 2026  00:47:46 +0000 (0:00:00.932)       0:01:23.520 **********
2026-04-13 00:58:09.712293 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.712303 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.712312 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.712319 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712325 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712331 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712376 | orchestrator |
2026-04-13 00:58:09.712383 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-13 00:58:09.712398 | orchestrator | Monday 13 April 2026  00:47:47 +0000 (0:00:00.720)       0:01:24.241 **********
2026-04-13 00:58:09.712405 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.712411 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.712417 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.712423 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712456 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712463 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712469 | orchestrator |
2026-04-13 00:58:09.712475 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-13 00:58:09.712482 | orchestrator | Monday 13 April 2026  00:47:49 +0000 (0:00:01.473)       0:01:25.714 **********
2026-04-13 00:58:09.712488 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.712494 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.712500 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.712506 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.712512 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.712518 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.712524 | orchestrator |
2026-04-13 00:58:09.712530 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-13 00:58:09.712537 | orchestrator | Monday 13 April 2026  00:47:49 +0000 (0:00:00.848)       0:01:26.563 **********
2026-04-13 00:58:09.712543 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.712549 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.712555 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.712561 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.712567 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.712573 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.712579 | orchestrator |
2026-04-13 00:58:09.712585 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-13 00:58:09.712591 | orchestrator | Monday 13 April 2026  00:47:51 +0000 (0:00:01.136)       0:01:27.699 **********
2026-04-13 00:58:09.712598 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.712604 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.712610 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.712616 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.712621 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.712627 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.712634 | orchestrator |
2026-04-13 00:58:09.712640 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-13 00:58:09.712646 | orchestrator | Monday 13 April 2026  00:47:52 +0000 (0:00:01.661)       0:01:29.361 **********
2026-04-13 00:58:09.712652 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.712658 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.712664 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.712670 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.712681 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.712687 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.712693 | orchestrator |
2026-04-13 00:58:09.712699 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-13 00:58:09.712705 | orchestrator | Monday 13 April 2026  00:47:54 +0000 (0:00:02.039)       0:01:31.401 **********
2026-04-13 00:58:09.712712 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.712718 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.712724 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.712730 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.712736 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.712742 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.712748 | orchestrator |
2026-04-13 00:58:09.712754 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-13 00:58:09.712760 | orchestrator | Monday 13 April 2026  00:47:57 +0000 (0:00:02.482)       0:01:33.884 **********
2026-04-13 00:58:09.712766 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.712777 | orchestrator |
2026-04-13 00:58:09.712784 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-13 00:58:09.712790 | orchestrator | Monday 13 April 2026  00:47:58 +0000 (0:00:01.301)       0:01:35.185 **********
2026-04-13 00:58:09.712796 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.712802 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.712808 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.712814 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712820 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712826 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712832 | orchestrator |
2026-04-13 00:58:09.712838 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-13 00:58:09.712844 | orchestrator | Monday 13 April 2026  00:47:59 +0000 (0:00:00.883)       0:01:36.068 **********
2026-04-13 00:58:09.712851 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.712857 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.712863 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.712869 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.712875 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.712881 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.712887 | orchestrator |
2026-04-13 00:58:09.712893 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-13 00:58:09.712900 | orchestrator | Monday 13 April 2026  00:48:00 +0000 (0:00:00.658)       0:01:36.726 **********
2026-04-13 00:58:09.712905 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-13 00:58:09.712910 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-13 00:58:09.712916 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-13 00:58:09.712921 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-13 00:58:09.712926 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-13 00:58:09.712932 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-13 00:58:09.712937 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-13 00:58:09.712943 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-13 00:58:09.712948 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-13 00:58:09.712953 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-13 00:58:09.712975 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-13 00:58:09.712981 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-13 00:58:09.712986 | orchestrator |
2026-04-13 00:58:09.712992 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-13 00:58:09.712997 | orchestrator | Monday 13 April 2026  00:48:01 +0000 (0:00:01.872)       0:01:38.599 **********
2026-04-13 00:58:09.713003 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.713008 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.713013 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.713019 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.713024 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.713029 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.713035 | orchestrator |
2026-04-13 00:58:09.713040 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-13 00:58:09.713046 | orchestrator | Monday 13 April 2026  00:48:03 +0000 (0:00:01.442)       0:01:40.042 **********
2026-04-13 00:58:09.713051 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.713056 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.713061 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.713071 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.713076 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.713081 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.713087 | orchestrator |
2026-04-13 00:58:09.713092 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-13 00:58:09.713098 | orchestrator | Monday 13 April 2026  00:48:04 +0000 (0:00:00.974)       0:01:41.016 **********
2026-04-13 00:58:09.713103 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.713108 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.713114 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.713119 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.713125 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.713130 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.713135 | orchestrator |
2026-04-13 00:58:09.713141 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-13 00:58:09.713150 | orchestrator | Monday 13 April 2026  00:48:05 +0000 (0:00:00.690)       0:01:41.707 **********
2026-04-13 00:58:09.713155 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.713160 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.713166 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.713171 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.713177 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.713182 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.713187 | orchestrator |
2026-04-13 00:58:09.713193 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-13 00:58:09.713198 | orchestrator | Monday 13 April 2026  00:48:05 +0000 (0:00:00.912)       0:01:42.619 **********
2026-04-13 00:58:09.713204 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.713209 | orchestrator |
2026-04-13 00:58:09.713215 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-13 00:58:09.713220 | orchestrator | Monday 13 April 2026  00:48:07 +0000 (0:00:01.300)       0:01:43.920 **********
2026-04-13 00:58:09.713226 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.713231 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.713236 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.713242 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.713247 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.713252 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.713258 | orchestrator |
2026-04-13 00:58:09.713263 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-13 00:58:09.713269 | orchestrator | Monday 13 April 2026  00:49:14 +0000 (0:01:07.315)       0:02:51.236 **********
2026-04-13 00:58:09.713274 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-13 00:58:09.713280 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-13 00:58:09.713285 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-13 00:58:09.713290 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.713296 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-13 00:58:09.713301 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-13 00:58:09.713307 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-13 00:58:09.713312 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.713317 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-13 00:58:09.713323 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-13 00:58:09.713328 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-13 00:58:09.713349 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-13 00:58:09.713362 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-13 00:58:09.713367 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-13 00:58:09.713372 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.713378 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-13 00:58:09.713383 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-13 00:58:09.713388 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-13 00:58:09.713394 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.713399 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.713422 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-13 00:58:09.713429 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-13 00:58:09.713434 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-13 00:58:09.713440 | orchestrator | skipping:
[testbed-node-2] 2026-04-13 00:58:09.713445 | orchestrator | 2026-04-13 00:58:09.713450 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-13 00:58:09.713456 | orchestrator | Monday 13 April 2026 00:49:15 +0000 (0:00:00.971) 0:02:52.207 ********** 2026-04-13 00:58:09.713461 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713467 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.713472 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.713477 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.713483 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.713488 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.713493 | orchestrator | 2026-04-13 00:58:09.713499 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-13 00:58:09.713504 | orchestrator | Monday 13 April 2026 00:49:16 +0000 (0:00:00.615) 0:02:52.823 ********** 2026-04-13 00:58:09.713510 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713515 | orchestrator | 2026-04-13 00:58:09.713520 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-13 00:58:09.713526 | orchestrator | Monday 13 April 2026 00:49:16 +0000 (0:00:00.151) 0:02:52.974 ********** 2026-04-13 00:58:09.713531 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713537 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.713542 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.713547 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.713555 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.713565 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.713573 | orchestrator | 2026-04-13 00:58:09.713582 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-13 00:58:09.713589 | 
orchestrator | Monday 13 April 2026 00:49:17 +0000 (0:00:00.957) 0:02:53.932 ********** 2026-04-13 00:58:09.713594 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713600 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.713605 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.713614 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.713619 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.713625 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.713630 | orchestrator | 2026-04-13 00:58:09.713635 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-13 00:58:09.713641 | orchestrator | Monday 13 April 2026 00:49:17 +0000 (0:00:00.662) 0:02:54.594 ********** 2026-04-13 00:58:09.713646 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713655 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.713663 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.713676 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.713689 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.713697 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.713713 | orchestrator | 2026-04-13 00:58:09.713721 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-13 00:58:09.713730 | orchestrator | Monday 13 April 2026 00:49:18 +0000 (0:00:00.851) 0:02:55.445 ********** 2026-04-13 00:58:09.713739 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.713747 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.713754 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.713761 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.713770 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.713779 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.713788 | orchestrator | 2026-04-13 00:58:09.713796 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-13 00:58:09.713805 | orchestrator | Monday 13 April 2026 00:49:21 +0000 (0:00:03.174) 0:02:58.619 ********** 2026-04-13 00:58:09.713813 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.713821 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.713830 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.713838 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.713846 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.713854 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.713862 | orchestrator | 2026-04-13 00:58:09.713871 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-13 00:58:09.713879 | orchestrator | Monday 13 April 2026 00:49:22 +0000 (0:00:00.864) 0:02:59.484 ********** 2026-04-13 00:58:09.713889 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.713899 | orchestrator | 2026-04-13 00:58:09.713907 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-13 00:58:09.713917 | orchestrator | Monday 13 April 2026 00:49:24 +0000 (0:00:01.347) 0:03:00.831 ********** 2026-04-13 00:58:09.713925 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713934 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.713943 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.713953 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.713958 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.713963 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.713969 | orchestrator | 2026-04-13 00:58:09.713974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-13 00:58:09.713980 | 
orchestrator | Monday 13 April 2026 00:49:24 +0000 (0:00:00.656) 0:03:01.487 ********** 2026-04-13 00:58:09.713985 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.713990 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.713996 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714001 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714006 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.714012 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714041 | orchestrator | 2026-04-13 00:58:09.714046 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-13 00:58:09.714052 | orchestrator | Monday 13 April 2026 00:49:25 +0000 (0:00:00.884) 0:03:02.372 ********** 2026-04-13 00:58:09.714057 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.714063 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.714097 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714104 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714109 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.714115 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714120 | orchestrator | 2026-04-13 00:58:09.714125 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-13 00:58:09.714131 | orchestrator | Monday 13 April 2026 00:49:26 +0000 (0:00:00.707) 0:03:03.080 ********** 2026-04-13 00:58:09.714136 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.714142 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.714147 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714158 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714163 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.714169 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714174 | orchestrator | 2026-04-13 
00:58:09.714180 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-13 00:58:09.714185 | orchestrator | Monday 13 April 2026 00:49:27 +0000 (0:00:01.080) 0:03:04.161 ********** 2026-04-13 00:58:09.714190 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.714196 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.714201 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714207 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714212 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.714217 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714223 | orchestrator | 2026-04-13 00:58:09.714228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-13 00:58:09.714233 | orchestrator | Monday 13 April 2026 00:49:28 +0000 (0:00:00.719) 0:03:04.880 ********** 2026-04-13 00:58:09.714239 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.714244 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.714250 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714255 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714260 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.714266 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714271 | orchestrator | 2026-04-13 00:58:09.714276 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-13 00:58:09.714282 | orchestrator | Monday 13 April 2026 00:49:29 +0000 (0:00:00.917) 0:03:05.798 ********** 2026-04-13 00:58:09.714287 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.714297 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.714303 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714308 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714313 | orchestrator | skipping: 
[testbed-node-1] 2026-04-13 00:58:09.714319 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714324 | orchestrator | 2026-04-13 00:58:09.714330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-13 00:58:09.714351 | orchestrator | Monday 13 April 2026 00:49:30 +0000 (0:00:00.887) 0:03:06.685 ********** 2026-04-13 00:58:09.714357 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.714362 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.714367 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.714373 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.714378 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.714384 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.714389 | orchestrator | 2026-04-13 00:58:09.714394 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-13 00:58:09.714400 | orchestrator | Monday 13 April 2026 00:49:31 +0000 (0:00:01.332) 0:03:08.018 ********** 2026-04-13 00:58:09.714405 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.714411 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.714416 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.714422 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.714427 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.714432 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.714438 | orchestrator | 2026-04-13 00:58:09.714443 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-13 00:58:09.714449 | orchestrator | Monday 13 April 2026 00:49:32 +0000 (0:00:01.246) 0:03:09.265 ********** 2026-04-13 00:58:09.714454 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 
00:58:09.714460 | orchestrator | 2026-04-13 00:58:09.714466 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-13 00:58:09.714471 | orchestrator | Monday 13 April 2026 00:49:33 +0000 (0:00:01.384) 0:03:10.649 ********** 2026-04-13 00:58:09.714481 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-13 00:58:09.714487 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-13 00:58:09.714492 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-13 00:58:09.714497 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-13 00:58:09.714503 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-13 00:58:09.714508 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-13 00:58:09.714514 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-13 00:58:09.714519 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-13 00:58:09.714525 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-13 00:58:09.714530 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-13 00:58:09.714535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-13 00:58:09.714541 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-13 00:58:09.714546 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-13 00:58:09.714552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-13 00:58:09.714557 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-13 00:58:09.714563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-13 00:58:09.714568 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-13 00:58:09.714574 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-13 00:58:09.714597 
| orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-13 00:58:09.714603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-13 00:58:09.714609 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-13 00:58:09.714614 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-13 00:58:09.714619 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-13 00:58:09.714625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-13 00:58:09.714630 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-13 00:58:09.714635 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-13 00:58:09.714641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-13 00:58:09.714646 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-13 00:58:09.714652 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-13 00:58:09.714657 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-13 00:58:09.714662 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-13 00:58:09.714668 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-13 00:58:09.714673 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-13 00:58:09.714678 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-13 00:58:09.714683 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-13 00:58:09.714689 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:58:09.714694 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-13 00:58:09.714700 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-13 00:58:09.714705 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-13 00:58:09.714710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-13 00:58:09.714716 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-13 00:58:09.714724 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:58:09.714730 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:58:09.714735 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-13 00:58:09.714745 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-13 00:58:09.714750 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:58:09.714755 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:58:09.714761 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:58:09.714766 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:58:09.714771 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:58:09.714777 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-13 00:58:09.714782 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:58:09.714787 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:58:09.714793 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:58:09.714798 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:58:09.714803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:58:09.714809 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-13 00:58:09.714814 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:58:09.714819 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:58:09.714824 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:58:09.714830 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:58:09.714835 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:58:09.714841 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-13 00:58:09.714846 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:58:09.714851 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:58:09.714860 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:58:09.714869 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:58:09.714878 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:58:09.714887 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-13 00:58:09.714895 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:58:09.714904 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:58:09.714913 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:58:09.714922 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:58:09.714930 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:58:09.714939 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-13 00:58:09.714949 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 
00:58:09.714984 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-13 00:58:09.714994 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:58:09.715000 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:58:09.715005 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:58:09.715011 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-13 00:58:09.715016 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:58:09.715021 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-13 00:58:09.715027 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-13 00:58:09.715038 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:58:09.715043 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:58:09.715048 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-13 00:58:09.715054 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-13 00:58:09.715059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-13 00:58:09.715065 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-13 00:58:09.715070 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-13 00:58:09.715075 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-13 00:58:09.715081 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-13 00:58:09.715086 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-13 00:58:09.715091 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-13 00:58:09.715096 | orchestrator | changed: [testbed-node-2] => 
(item=/var/log/ceph) 2026-04-13 00:58:09.715102 | orchestrator | 2026-04-13 00:58:09.715107 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-13 00:58:09.715113 | orchestrator | Monday 13 April 2026 00:49:41 +0000 (0:00:07.110) 0:03:17.760 ********** 2026-04-13 00:58:09.715122 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.715127 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.715132 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.715138 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.715144 | orchestrator | 2026-04-13 00:58:09.715149 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-13 00:58:09.715155 | orchestrator | Monday 13 April 2026 00:49:42 +0000 (0:00:01.187) 0:03:18.947 ********** 2026-04-13 00:58:09.715160 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:58:09.715166 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:58:09.715171 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:58:09.715176 | orchestrator | 2026-04-13 00:58:09.715182 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-13 00:58:09.715187 | orchestrator | Monday 13 April 2026 00:49:43 +0000 (0:00:00.828) 0:03:19.776 ********** 2026-04-13 00:58:09.715192 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-13 00:58:09.715198 | orchestrator | 
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-13 00:58:09.715203 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-13 00:58:09.715208 | orchestrator | 2026-04-13 00:58:09.715214 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-13 00:58:09.715219 | orchestrator | Monday 13 April 2026 00:49:44 +0000 (0:00:01.304) 0:03:21.080 ********** 2026-04-13 00:58:09.715224 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.715230 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.715235 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.715240 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.715246 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.715251 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.715260 | orchestrator | 2026-04-13 00:58:09.715269 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-13 00:58:09.715289 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:00.670) 0:03:21.751 ********** 2026-04-13 00:58:09.715299 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.715305 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.715310 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.715316 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.715321 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.715326 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.715346 | orchestrator | 2026-04-13 00:58:09.715352 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-13 00:58:09.715357 | orchestrator | Monday 13 April 2026 00:49:45 +0000 (0:00:00.550) 0:03:22.301 ********** 2026-04-13 
00:58:09.715363 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.715368 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.715373 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.715378 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.715384 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.715389 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.715394 | orchestrator | 2026-04-13 00:58:09.715420 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-13 00:58:09.715427 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:00.729) 0:03:23.031 ********** 2026-04-13 00:58:09.715432 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.715438 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.715443 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.715451 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.715460 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.715468 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.715477 | orchestrator | 2026-04-13 00:58:09.715486 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-13 00:58:09.715494 | orchestrator | Monday 13 April 2026 00:49:46 +0000 (0:00:00.540) 0:03:23.571 ********** 2026-04-13 00:58:09.715504 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.715510 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.715515 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.715520 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.715525 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.715531 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.715536 | orchestrator | 2026-04-13 00:58:09.715541 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many 
osds are to be created] ***
2026-04-13 00:58:09.715547 | orchestrator | Monday 13 April 2026 00:49:47 +0000 (0:00:00.876) 0:03:24.447 **********
2026-04-13 00:58:09.715552 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.715558 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.715563 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.715568 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715573 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715578 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715584 | orchestrator |
2026-04-13 00:58:09.715589 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-13 00:58:09.715594 | orchestrator | Monday 13 April 2026 00:49:48 +0000 (0:00:00.663) 0:03:25.111 **********
2026-04-13 00:58:09.715600 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.715605 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.715610 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.715616 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715625 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715630 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715635 | orchestrator |
2026-04-13 00:58:09.715641 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-13 00:58:09.715646 | orchestrator | Monday 13 April 2026 00:49:49 +0000 (0:00:01.080) 0:03:26.191 **********
2026-04-13 00:58:09.715656 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.715662 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.715667 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.715672 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715678 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715683 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715688 | orchestrator |
2026-04-13 00:58:09.715694 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-13 00:58:09.715699 | orchestrator | Monday 13 April 2026 00:49:50 +0000 (0:00:00.757) 0:03:26.948 **********
2026-04-13 00:58:09.715704 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715709 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715715 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715720 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.715725 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.715731 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.715736 | orchestrator |
2026-04-13 00:58:09.715741 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-13 00:58:09.715747 | orchestrator | Monday 13 April 2026 00:49:53 +0000 (0:00:03.114) 0:03:30.063 **********
2026-04-13 00:58:09.715752 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.715757 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.715763 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.715768 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715773 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715779 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715784 | orchestrator |
2026-04-13 00:58:09.715789 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-13 00:58:09.715794 | orchestrator | Monday 13 April 2026 00:49:54 +0000 (0:00:00.656) 0:03:30.719 **********
2026-04-13 00:58:09.715800 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.715805 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.715811 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.715816 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715821 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715826 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715832 | orchestrator |
2026-04-13 00:58:09.715837 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-13 00:58:09.715843 | orchestrator | Monday 13 April 2026 00:49:54 +0000 (0:00:00.741) 0:03:31.460 **********
2026-04-13 00:58:09.715848 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.715853 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.715858 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.715864 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715869 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715874 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715880 | orchestrator |
2026-04-13 00:58:09.715885 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-13 00:58:09.715890 | orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:00.602) 0:03:32.063 **********
2026-04-13 00:58:09.715895 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.715901 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.715906 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.715912 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.715936 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.715943 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.715948 | orchestrator |
2026-04-13 00:58:09.715954 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-13 00:58:09.715959 | orchestrator | Monday 13 April 2026 00:49:55 +0000 (0:00:00.498) 0:03:32.562 **********
2026-04-13 00:58:09.715970 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-13 00:58:09.715977 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-13 00:58:09.715984 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.715990 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-13 00:58:09.715999 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-13 00:58:09.716005 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-13 00:58:09.716011 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-13 00:58:09.716016 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716021 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716027 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716032 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716037 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716043 | orchestrator |
2026-04-13 00:58:09.716048 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-13 00:58:09.716053 | orchestrator | Monday 13 April 2026 00:49:56 +0000 (0:00:00.730) 0:03:33.292 **********
2026-04-13 00:58:09.716059 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716064 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716070 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716075 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716080 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716086 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716091 | orchestrator |
2026-04-13 00:58:09.716096 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-13 00:58:09.716102 | orchestrator | Monday 13 April 2026 00:49:57 +0000 (0:00:00.559) 0:03:33.852 **********
2026-04-13 00:58:09.716107 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716112 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716118 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716123 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716128 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716134 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716139 | orchestrator |
2026-04-13 00:58:09.716145 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-13 00:58:09.716150 | orchestrator | Monday 13 April 2026 00:49:57 +0000 (0:00:00.798) 0:03:34.650 **********
2026-04-13 00:58:09.716159 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716165 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716170 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716176 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716181 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716186 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716192 | orchestrator |
2026-04-13 00:58:09.716197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-13 00:58:09.716202 | orchestrator | Monday 13 April 2026 00:49:58 +0000 (0:00:00.599) 0:03:35.250 **********
2026-04-13 00:58:09.716208 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716213 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716218 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716224 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716229 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716234 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716240 | orchestrator |
2026-04-13 00:58:09.716245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-13 00:58:09.716266 | orchestrator | Monday 13 April 2026 00:49:59 +0000 (0:00:00.921) 0:03:36.171 **********
2026-04-13 00:58:09.716273 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716278 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716283 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716289 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716294 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716299 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716305 | orchestrator |
2026-04-13 00:58:09.716310 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-13 00:58:09.716315 | orchestrator | Monday 13 April 2026 00:50:00 +0000 (0:00:00.608) 0:03:36.779 **********
2026-04-13 00:58:09.716321 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.716326 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.716367 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.716374 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716380 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716385 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716391 | orchestrator |
2026-04-13 00:58:09.716396 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-13 00:58:09.716401 | orchestrator | Monday 13 April 2026 00:50:01 +0000 (0:00:01.400) 0:03:38.180 **********
2026-04-13 00:58:09.716407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.716412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.716418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.716423 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716428 | orchestrator |
2026-04-13 00:58:09.716434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-13 00:58:09.716439 | orchestrator | Monday 13 April 2026 00:50:01 +0000 (0:00:00.399) 0:03:38.579 **********
2026-04-13 00:58:09.716444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.716450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.716455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.716460 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716466 | orchestrator |
2026-04-13 00:58:09.716474 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-13 00:58:09.716480 | orchestrator | Monday 13 April 2026 00:50:02 +0000 (0:00:00.455) 0:03:39.035 **********
2026-04-13 00:58:09.716485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.716491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.716496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.716506 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716512 | orchestrator |
2026-04-13 00:58:09.716517 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-13 00:58:09.716522 | orchestrator | Monday 13 April 2026 00:50:02 +0000 (0:00:00.444) 0:03:39.479 **********
2026-04-13 00:58:09.716528 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.716533 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.716538 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.716544 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716549 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716554 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716559 | orchestrator |
2026-04-13 00:58:09.716565 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-13 00:58:09.716570 | orchestrator | Monday 13 April 2026 00:50:03 +0000 (0:00:01.135) 0:03:40.614 **********
2026-04-13 00:58:09.716576 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-13 00:58:09.716581 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-13 00:58:09.716586 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-13 00:58:09.716592 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716597 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-13 00:58:09.716602 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-13 00:58:09.716608 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716613 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-13 00:58:09.716619 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716624 | orchestrator |
2026-04-13 00:58:09.716629 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-13 00:58:09.716635 | orchestrator | Monday 13 April 2026 00:50:06 +0000 (0:00:02.070) 0:03:42.684 **********
2026-04-13 00:58:09.716640 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.716645 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.716651 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.716656 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.716661 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.716667 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.716672 | orchestrator |
2026-04-13 00:58:09.716677 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-13 00:58:09.716683 | orchestrator | Monday 13 April 2026 00:50:08 +0000 (0:00:02.747) 0:03:45.432 **********
2026-04-13 00:58:09.716688 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.716693 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.716699 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.716704 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.716709 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.716715 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.716720 | orchestrator |
2026-04-13 00:58:09.716726 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-13 00:58:09.716731 | orchestrator | Monday 13 April 2026 00:50:10 +0000 (0:00:01.757) 0:03:47.190 **********
2026-04-13 00:58:09.716736 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716742 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.716747 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.716753 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.716758 | orchestrator |
2026-04-13 00:58:09.716764 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-13 00:58:09.716788 | orchestrator | Monday 13 April 2026 00:50:11 +0000 (0:00:01.080) 0:03:48.271 **********
2026-04-13 00:58:09.716794 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.716799 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.716805 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.716810 | orchestrator |
2026-04-13 00:58:09.716816 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-13 00:58:09.716825 | orchestrator | Monday 13 April 2026 00:50:12 +0000 (0:00:00.460) 0:03:48.731 **********
2026-04-13 00:58:09.716830 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.716835 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.716841 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.716846 | orchestrator |
2026-04-13 00:58:09.716852 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-13 00:58:09.716857 | orchestrator | Monday 13 April 2026 00:50:13 +0000 (0:00:01.620) 0:03:50.351 **********
2026-04-13 00:58:09.716863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:58:09.716868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:58:09.716873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:58:09.716879 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716884 | orchestrator |
2026-04-13 00:58:09.716889 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-13 00:58:09.716894 | orchestrator | Monday 13 April 2026 00:50:14 +0000 (0:00:00.672) 0:03:51.024 **********
2026-04-13 00:58:09.716899 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.716903 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.716908 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.716913 | orchestrator |
2026-04-13 00:58:09.716918 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-13 00:58:09.716922 | orchestrator | Monday 13 April 2026 00:50:14 +0000 (0:00:00.390) 0:03:51.414 **********
2026-04-13 00:58:09.716927 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.716932 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.716937 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.716941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.716946 | orchestrator |
2026-04-13 00:58:09.716954 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-13 00:58:09.716959 | orchestrator | Monday 13 April 2026 00:50:15 +0000 (0:00:01.086) 0:03:52.500 **********
2026-04-13 00:58:09.716964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.716969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.716973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.716978 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.716983 | orchestrator |
2026-04-13 00:58:09.716988 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-13 00:58:09.716992 | orchestrator | Monday 13 April 2026 00:50:16 +0000 (0:00:00.396) 0:03:52.897 **********
2026-04-13 00:58:09.716997 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717002 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.717007 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.717011 | orchestrator |
2026-04-13 00:58:09.717016 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-13 00:58:09.717021 | orchestrator | Monday 13 April 2026 00:50:16 +0000 (0:00:00.338) 0:03:53.235 **********
2026-04-13 00:58:09.717026 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717030 | orchestrator |
2026-04-13 00:58:09.717035 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-13 00:58:09.717040 | orchestrator | Monday 13 April 2026 00:50:16 +0000 (0:00:00.223) 0:03:53.459 **********
2026-04-13 00:58:09.717045 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717049 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.717054 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.717059 | orchestrator |
2026-04-13 00:58:09.717063 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-13 00:58:09.717068 | orchestrator | Monday 13 April 2026 00:50:17 +0000 (0:00:00.314) 0:03:53.773 **********
2026-04-13 00:58:09.717073 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717078 | orchestrator |
2026-04-13 00:58:09.717085 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-13 00:58:09.717090 | orchestrator | Monday 13 April 2026 00:50:17 +0000 (0:00:00.235) 0:03:54.008 **********
2026-04-13 00:58:09.717095 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717099 | orchestrator |
2026-04-13 00:58:09.717104 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-13 00:58:09.717109 | orchestrator | Monday 13 April 2026 00:50:18 +0000 (0:00:00.796) 0:03:54.805 **********
2026-04-13 00:58:09.717114 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717118 | orchestrator |
2026-04-13 00:58:09.717123 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-13 00:58:09.717128 | orchestrator | Monday 13 April 2026 00:50:18 +0000 (0:00:00.168) 0:03:54.973 **********
2026-04-13 00:58:09.717133 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717137 | orchestrator |
2026-04-13 00:58:09.717142 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-13 00:58:09.717147 | orchestrator | Monday 13 April 2026 00:50:18 +0000 (0:00:00.230) 0:03:55.204 **********
2026-04-13 00:58:09.717152 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717156 | orchestrator |
2026-04-13 00:58:09.717161 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-13 00:58:09.717166 | orchestrator | Monday 13 April 2026 00:50:18 +0000 (0:00:00.241) 0:03:55.446 **********
2026-04-13 00:58:09.717171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.717176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.717180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.717185 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717190 | orchestrator |
2026-04-13 00:58:09.717195 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-13 00:58:09.717215 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:00.418) 0:03:55.865 **********
2026-04-13 00:58:09.717221 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717225 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.717230 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.717235 | orchestrator |
2026-04-13 00:58:09.717240 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-13 00:58:09.717244 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:00.368) 0:03:56.233 **********
2026-04-13 00:58:09.717249 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717254 | orchestrator |
2026-04-13 00:58:09.717259 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-13 00:58:09.717264 | orchestrator | Monday 13 April 2026 00:50:19 +0000 (0:00:00.260) 0:03:56.493 **********
2026-04-13 00:58:09.717268 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717273 | orchestrator |
2026-04-13 00:58:09.717278 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-13 00:58:09.717282 | orchestrator | Monday 13 April 2026 00:50:20 +0000 (0:00:00.245) 0:03:56.738 **********
2026-04-13 00:58:09.717287 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717292 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.717297 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.717301 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.717306 | orchestrator |
2026-04-13 00:58:09.717311 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-13 00:58:09.717316 | orchestrator | Monday 13 April 2026 00:50:21 +0000 (0:00:01.539) 0:03:58.277 **********
2026-04-13 00:58:09.717321 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.717325 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.717330 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.717348 | orchestrator |
2026-04-13 00:58:09.717353 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-13 00:58:09.717362 | orchestrator | Monday 13 April 2026 00:50:22 +0000 (0:00:00.403) 0:03:58.681 **********
2026-04-13 00:58:09.717366 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.717371 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.717376 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.717381 | orchestrator |
2026-04-13 00:58:09.717388 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-13 00:58:09.717393 | orchestrator | Monday 13 April 2026 00:50:23 +0000 (0:00:01.498) 0:04:00.179 **********
2026-04-13 00:58:09.717398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.717403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.717408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.717413 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717417 | orchestrator |
2026-04-13 00:58:09.717422 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-13 00:58:09.717427 | orchestrator | Monday 13 April 2026 00:50:24 +0000 (0:00:00.629) 0:04:00.809 **********
2026-04-13 00:58:09.717432 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.717437 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.717441 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.717446 | orchestrator |
2026-04-13 00:58:09.717451 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-13 00:58:09.717456 | orchestrator | Monday 13 April 2026 00:50:24 +0000 (0:00:00.307) 0:04:01.116 **********
2026-04-13 00:58:09.717461 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717465 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.717470 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.717475 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.717480 | orchestrator |
2026-04-13 00:58:09.717484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-13 00:58:09.717489 | orchestrator | Monday 13 April 2026 00:50:25 +0000 (0:00:00.850) 0:04:01.966 **********
2026-04-13 00:58:09.717494 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.717499 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.717504 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.717508 | orchestrator |
2026-04-13 00:58:09.717513 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-13 00:58:09.717518 | orchestrator | Monday 13 April 2026 00:50:25 +0000 (0:00:00.309) 0:04:02.275 **********
2026-04-13 00:58:09.717523 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.717528 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.717532 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.717537 | orchestrator |
2026-04-13 00:58:09.717542 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-13 00:58:09.717547 | orchestrator | Monday 13 April 2026 00:50:26 +0000 (0:00:01.278) 0:04:03.554 **********
2026-04-13 00:58:09.717551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.717556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.717561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.717566 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717571 | orchestrator |
2026-04-13 00:58:09.717576 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-13 00:58:09.717580 | orchestrator | Monday 13 April 2026 00:50:27 +0000 (0:00:00.879) 0:04:04.433 **********
2026-04-13 00:58:09.717585 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.717590 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.717595 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.717600 | orchestrator |
2026-04-13 00:58:09.717604 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-13 00:58:09.717609 | orchestrator | Monday 13 April 2026 00:50:28 +0000 (0:00:00.356) 0:04:04.790 **********
2026-04-13 00:58:09.717618 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717623 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.717628 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.717633 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.717638 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.717656 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717662 | orchestrator |
2026-04-13 00:58:09.717667 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-13 00:58:09.717672 | orchestrator | Monday 13 April 2026 00:50:28 +0000 (0:00:00.778) 0:04:05.568 **********
2026-04-13 00:58:09.717676 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.717681 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.717686 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.717691 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.717695 | orchestrator |
2026-04-13 00:58:09.717700 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-13 00:58:09.717705 | orchestrator | Monday 13 April 2026 00:50:29 +0000 (0:00:01.042) 0:04:06.611 **********
2026-04-13 00:58:09.717710 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.717714 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.717719 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.717724 | orchestrator |
2026-04-13 00:58:09.717729 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-13 00:58:09.717733 | orchestrator | Monday 13 April 2026 00:50:30 +0000 (0:00:00.293) 0:04:06.904 **********
2026-04-13 00:58:09.717738 | orchestrator | changed: [testbed-node-0]
2026-04-13 00:58:09.717743 | orchestrator | changed: [testbed-node-1]
2026-04-13 00:58:09.717748 | orchestrator | changed: [testbed-node-2]
2026-04-13 00:58:09.717753 | orchestrator |
2026-04-13 00:58:09.717757 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-13 00:58:09.717762 | orchestrator | Monday 13 April 2026 00:50:31 +0000 (0:00:01.086) 0:04:07.991 **********
2026-04-13 00:58:09.717767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-13 00:58:09.717772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-13 00:58:09.717776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-13 00:58:09.717781 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717786 | orchestrator |
2026-04-13 00:58:09.717791 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-13 00:58:09.717798 | orchestrator | Monday 13 April 2026 00:50:32 +0000 (0:00:00.731) 0:04:08.722 **********
2026-04-13 00:58:09.717803 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.717808 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.717813 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.717817 | orchestrator |
2026-04-13 00:58:09.717822 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-13 00:58:09.717827 | orchestrator |
2026-04-13 00:58:09.717832 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-13 00:58:09.717837 | orchestrator | Monday 13 April 2026 00:50:32 +0000 (0:00:00.852) 0:04:09.574 **********
2026-04-13 00:58:09.717841 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.717846 | orchestrator |
2026-04-13 00:58:09.717851 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-13 00:58:09.717855 | orchestrator | Monday 13 April 2026 00:50:33 +0000 (0:00:00.618) 0:04:10.193 **********
2026-04-13 00:58:09.717860 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 00:58:09.717865 | orchestrator |
2026-04-13 00:58:09.717870 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-13 00:58:09.717874 | orchestrator | Monday 13 April 2026 00:50:34 +0000 (0:00:00.983) 0:04:11.176 **********
2026-04-13 00:58:09.717886 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.717891 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.717896 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.717901 | orchestrator |
2026-04-13 00:58:09.717905 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-13 00:58:09.717910 | orchestrator | Monday 13 April 2026 00:50:35 +0000 (0:00:00.894) 0:04:12.071 **********
2026-04-13 00:58:09.717915 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717920 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.717925 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.717929 | orchestrator |
2026-04-13 00:58:09.717934 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-13 00:58:09.717939 | orchestrator | Monday 13 April 2026 00:50:35 +0000 (0:00:00.328) 0:04:12.399 **********
2026-04-13 00:58:09.717944 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717948 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.717953 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.717958 | orchestrator |
2026-04-13 00:58:09.717963 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-13 00:58:09.717967 | orchestrator | Monday 13 April 2026 00:50:36 +0000 (0:00:00.349) 0:04:12.748 **********
2026-04-13 00:58:09.717972 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.717977 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.717982 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.717986 | orchestrator |
2026-04-13 00:58:09.717991 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-13 00:58:09.717996 | orchestrator | Monday 13 April 2026 00:50:36 +0000 (0:00:00.344) 0:04:13.093 **********
2026-04-13 00:58:09.718001 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.718006 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.718010 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.718036 | orchestrator |
2026-04-13 00:58:09.718043 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-13 00:58:09.718047 | orchestrator | Monday 13 April 2026 00:50:37 +0000 (0:00:01.077) 0:04:14.170 **********
2026-04-13 00:58:09.718052 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.718057 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.718062 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.718066 | orchestrator |
2026-04-13 00:58:09.718071 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-13 00:58:09.718076 | orchestrator | Monday 13 April 2026 00:50:37 +0000 (0:00:00.317) 0:04:14.488 **********
2026-04-13 00:58:09.718096 | orchestrator | skipping: [testbed-node-0]
2026-04-13 00:58:09.718102 | orchestrator | skipping: [testbed-node-1]
2026-04-13 00:58:09.718106 | orchestrator | skipping: [testbed-node-2]
2026-04-13 00:58:09.718111 | orchestrator |
2026-04-13 00:58:09.718116 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-13 00:58:09.718121 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:00.316) 0:04:14.804 **********
2026-04-13 00:58:09.718125 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.718130 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.718135 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.718140 | orchestrator |
2026-04-13 00:58:09.718144 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-13 00:58:09.718149 | orchestrator | Monday 13 April 2026 00:50:38 +0000 (0:00:00.808) 0:04:15.612 **********
2026-04-13 00:58:09.718154 | orchestrator | ok: [testbed-node-0]
2026-04-13 00:58:09.718159 | orchestrator | ok: [testbed-node-1]
2026-04-13 00:58:09.718164 | orchestrator | ok: [testbed-node-2]
2026-04-13 00:58:09.718172 | orchestrator |
2026-04-13 00:58:09.718179 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-13 00:58:09.718187 | orchestrator | Monday 13 April 2026 00:50:40 +0000 (0:00:01.462) 0:04:17.075 **********
2026-04-13 00:58:09.718195 | orchestrator |
skipping: [testbed-node-0] 2026-04-13 00:58:09.718208 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.718215 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.718223 | orchestrator | 2026-04-13 00:58:09.718230 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:58:09.718238 | orchestrator | Monday 13 April 2026 00:50:40 +0000 (0:00:00.420) 0:04:17.495 ********** 2026-04-13 00:58:09.718245 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718252 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718258 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718265 | orchestrator | 2026-04-13 00:58:09.718273 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:58:09.718281 | orchestrator | Monday 13 April 2026 00:50:41 +0000 (0:00:00.616) 0:04:18.112 ********** 2026-04-13 00:58:09.718288 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.718295 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.718303 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.718310 | orchestrator | 2026-04-13 00:58:09.718317 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:58:09.718329 | orchestrator | Monday 13 April 2026 00:50:41 +0000 (0:00:00.418) 0:04:18.530 ********** 2026-04-13 00:58:09.718372 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.718378 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.718383 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.718387 | orchestrator | 2026-04-13 00:58:09.718392 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:58:09.718397 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:00.645) 0:04:19.176 ********** 2026-04-13 00:58:09.718402 | orchestrator | skipping: 
[testbed-node-0] 2026-04-13 00:58:09.718407 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.718411 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.718416 | orchestrator | 2026-04-13 00:58:09.718421 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:58:09.718426 | orchestrator | Monday 13 April 2026 00:50:42 +0000 (0:00:00.350) 0:04:19.526 ********** 2026-04-13 00:58:09.718430 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.718435 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.718440 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.718445 | orchestrator | 2026-04-13 00:58:09.718449 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:58:09.718454 | orchestrator | Monday 13 April 2026 00:50:43 +0000 (0:00:00.357) 0:04:19.884 ********** 2026-04-13 00:58:09.718459 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.718464 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.718469 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.718473 | orchestrator | 2026-04-13 00:58:09.718478 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:58:09.718483 | orchestrator | Monday 13 April 2026 00:50:43 +0000 (0:00:00.387) 0:04:20.272 ********** 2026-04-13 00:58:09.718488 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718493 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718497 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718502 | orchestrator | 2026-04-13 00:58:09.718507 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:58:09.718512 | orchestrator | Monday 13 April 2026 00:50:44 +0000 (0:00:00.400) 0:04:20.673 ********** 2026-04-13 00:58:09.718516 | orchestrator | ok: [testbed-node-0] 2026-04-13 
00:58:09.718521 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718526 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718531 | orchestrator | 2026-04-13 00:58:09.718536 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:58:09.718541 | orchestrator | Monday 13 April 2026 00:50:44 +0000 (0:00:00.708) 0:04:21.381 ********** 2026-04-13 00:58:09.718545 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718550 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718555 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718565 | orchestrator | 2026-04-13 00:58:09.718569 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-13 00:58:09.718574 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:00.588) 0:04:21.970 ********** 2026-04-13 00:58:09.718579 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718584 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718588 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718593 | orchestrator | 2026-04-13 00:58:09.718598 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-13 00:58:09.718603 | orchestrator | Monday 13 April 2026 00:50:45 +0000 (0:00:00.328) 0:04:22.299 ********** 2026-04-13 00:58:09.718608 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.718612 | orchestrator | 2026-04-13 00:58:09.718617 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-13 00:58:09.718622 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:00.916) 0:04:23.216 ********** 2026-04-13 00:58:09.718627 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.718632 | orchestrator | 2026-04-13 00:58:09.718659 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-04-13 00:58:09.718664 | orchestrator | Monday 13 April 2026 00:50:46 +0000 (0:00:00.167) 0:04:23.384 ********** 2026-04-13 00:58:09.718669 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-13 00:58:09.718674 | orchestrator | 2026-04-13 00:58:09.718679 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-13 00:58:09.718683 | orchestrator | Monday 13 April 2026 00:50:47 +0000 (0:00:01.172) 0:04:24.556 ********** 2026-04-13 00:58:09.718688 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718693 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718698 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718702 | orchestrator | 2026-04-13 00:58:09.718707 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-13 00:58:09.718712 | orchestrator | Monday 13 April 2026 00:50:48 +0000 (0:00:00.326) 0:04:24.883 ********** 2026-04-13 00:58:09.718717 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718721 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718726 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718731 | orchestrator | 2026-04-13 00:58:09.718736 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-13 00:58:09.718740 | orchestrator | Monday 13 April 2026 00:50:48 +0000 (0:00:00.339) 0:04:25.222 ********** 2026-04-13 00:58:09.718745 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.718750 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.718755 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.718759 | orchestrator | 2026-04-13 00:58:09.718764 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-13 00:58:09.718769 | orchestrator | Monday 13 April 2026 00:50:49 +0000 (0:00:01.315) 0:04:26.537 ********** 
2026-04-13 00:58:09.718774 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.718779 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.718783 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.718788 | orchestrator | 2026-04-13 00:58:09.718793 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-13 00:58:09.718798 | orchestrator | Monday 13 April 2026 00:50:50 +0000 (0:00:00.761) 0:04:27.298 ********** 2026-04-13 00:58:09.718803 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.718807 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.718815 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.718820 | orchestrator | 2026-04-13 00:58:09.718825 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-13 00:58:09.718829 | orchestrator | Monday 13 April 2026 00:50:51 +0000 (0:00:00.765) 0:04:28.064 ********** 2026-04-13 00:58:09.718834 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718839 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.718848 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.718852 | orchestrator | 2026-04-13 00:58:09.718857 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-13 00:58:09.718862 | orchestrator | Monday 13 April 2026 00:50:52 +0000 (0:00:00.742) 0:04:28.807 ********** 2026-04-13 00:58:09.718867 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.718872 | orchestrator | 2026-04-13 00:58:09.718876 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-13 00:58:09.718881 | orchestrator | Monday 13 April 2026 00:50:53 +0000 (0:00:01.647) 0:04:30.455 ********** 2026-04-13 00:58:09.718886 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.718891 | orchestrator | 2026-04-13 00:58:09.718896 | orchestrator | TASK [ceph-mon : 
Copy admin keyring over to mons] ****************************** 2026-04-13 00:58:09.718900 | orchestrator | Monday 13 April 2026 00:50:55 +0000 (0:00:01.516) 0:04:31.972 ********** 2026-04-13 00:58:09.718905 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:58:09.718909 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:58:09.718914 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:58:09.718918 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:58:09.718923 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-13 00:58:09.718927 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 00:58:09.718932 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:58:09.718936 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-13 00:58:09.718941 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 00:58:09.718945 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-13 00:58:09.718950 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-13 00:58:09.718955 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-13 00:58:09.718959 | orchestrator | 2026-04-13 00:58:09.718964 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-13 00:58:09.718968 | orchestrator | Monday 13 April 2026 00:50:58 +0000 (0:00:03.162) 0:04:35.134 ********** 2026-04-13 00:58:09.718973 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.718977 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.718982 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.718986 | orchestrator | 2026-04-13 00:58:09.718991 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-04-13 00:58:09.718995 | orchestrator | Monday 13 April 2026 00:51:00 +0000 (0:00:01.538) 0:04:36.673 ********** 2026-04-13 00:58:09.719000 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719004 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719009 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719013 | orchestrator | 2026-04-13 00:58:09.719018 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-13 00:58:09.719022 | orchestrator | Monday 13 April 2026 00:51:00 +0000 (0:00:00.406) 0:04:37.079 ********** 2026-04-13 00:58:09.719027 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719031 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719036 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719040 | orchestrator | 2026-04-13 00:58:09.719045 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-13 00:58:09.719049 | orchestrator | Monday 13 April 2026 00:51:00 +0000 (0:00:00.330) 0:04:37.410 ********** 2026-04-13 00:58:09.719054 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.719072 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.719077 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.719082 | orchestrator | 2026-04-13 00:58:09.719086 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-13 00:58:09.719091 | orchestrator | Monday 13 April 2026 00:51:02 +0000 (0:00:02.150) 0:04:39.560 ********** 2026-04-13 00:58:09.719095 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.719104 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.719108 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.719113 | orchestrator | 2026-04-13 00:58:09.719117 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-13 
00:58:09.719122 | orchestrator | Monday 13 April 2026 00:51:04 +0000 (0:00:01.237) 0:04:40.798 ********** 2026-04-13 00:58:09.719126 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719131 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719135 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719140 | orchestrator | 2026-04-13 00:58:09.719144 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-13 00:58:09.719149 | orchestrator | Monday 13 April 2026 00:51:04 +0000 (0:00:00.293) 0:04:41.092 ********** 2026-04-13 00:58:09.719153 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-04-13 00:58:09.719158 | orchestrator | 2026-04-13 00:58:09.719162 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-13 00:58:09.719167 | orchestrator | Monday 13 April 2026 00:51:05 +0000 (0:00:00.974) 0:04:42.067 ********** 2026-04-13 00:58:09.719171 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719176 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719180 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719185 | orchestrator | 2026-04-13 00:58:09.719189 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-13 00:58:09.719194 | orchestrator | Monday 13 April 2026 00:51:05 +0000 (0:00:00.324) 0:04:42.391 ********** 2026-04-13 00:58:09.719198 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719203 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719207 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719212 | orchestrator | 2026-04-13 00:58:09.719219 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-13 00:58:09.719224 | orchestrator | Monday 13 April 2026 00:51:05 
+0000 (0:00:00.259) 0:04:42.650 ********** 2026-04-13 00:58:09.719228 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.719233 | orchestrator | 2026-04-13 00:58:09.719237 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-13 00:58:09.719242 | orchestrator | Monday 13 April 2026 00:51:06 +0000 (0:00:00.634) 0:04:43.285 ********** 2026-04-13 00:58:09.719246 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.719251 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.719255 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.719260 | orchestrator | 2026-04-13 00:58:09.719264 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-13 00:58:09.719269 | orchestrator | Monday 13 April 2026 00:51:08 +0000 (0:00:02.242) 0:04:45.527 ********** 2026-04-13 00:58:09.719273 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.719278 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.719282 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.719287 | orchestrator | 2026-04-13 00:58:09.719291 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-13 00:58:09.719296 | orchestrator | Monday 13 April 2026 00:51:09 +0000 (0:00:01.138) 0:04:46.665 ********** 2026-04-13 00:58:09.719300 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.719305 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.719309 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.719314 | orchestrator | 2026-04-13 00:58:09.719318 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-13 00:58:09.719323 | orchestrator | Monday 13 April 2026 00:51:11 +0000 (0:00:01.742) 0:04:48.408 ********** 2026-04-13 00:58:09.719327 | 
orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.719348 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.719353 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.719361 | orchestrator | 2026-04-13 00:58:09.719365 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-13 00:58:09.719370 | orchestrator | Monday 13 April 2026 00:51:13 +0000 (0:00:01.894) 0:04:50.302 ********** 2026-04-13 00:58:09.719374 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.719379 | orchestrator | 2026-04-13 00:58:09.719384 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-13 00:58:09.719388 | orchestrator | Monday 13 April 2026 00:51:14 +0000 (0:00:00.901) 0:04:51.203 ********** 2026-04-13 00:58:09.719393 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-13 00:58:09.719397 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719402 | orchestrator | 2026-04-13 00:58:09.719407 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-13 00:58:09.719411 | orchestrator | Monday 13 April 2026 00:51:36 +0000 (0:00:21.878) 0:05:13.082 ********** 2026-04-13 00:58:09.719416 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719420 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719425 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719429 | orchestrator | 2026-04-13 00:58:09.719434 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-13 00:58:09.719438 | orchestrator | Monday 13 April 2026 00:51:45 +0000 (0:00:09.347) 0:05:22.429 ********** 2026-04-13 00:58:09.719443 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719447 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719452 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719456 | orchestrator | 2026-04-13 00:58:09.719461 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-13 00:58:09.719479 | orchestrator | Monday 13 April 2026 00:51:46 +0000 (0:00:00.560) 0:05:22.990 ********** 2026-04-13 00:58:09.719486 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-13 00:58:09.719493 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-13 00:58:09.719499 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-13 00:58:09.719507 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-13 00:58:09.719512 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-13 00:58:09.719517 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a0df229d52359229a89bc3797058c6a6f91b354d'}])  2026-04-13 00:58:09.719526 | orchestrator | 2026-04-13 00:58:09.719531 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:58:09.719536 | orchestrator | Monday 13 April 2026 00:52:01 +0000 (0:00:15.035) 0:05:38.025 ********** 2026-04-13 00:58:09.719540 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719545 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719549 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719554 | orchestrator | 2026-04-13 00:58:09.719558 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-13 00:58:09.719563 | orchestrator | Monday 13 April 2026 00:52:01 +0000 (0:00:00.332) 0:05:38.358 ********** 2026-04-13 00:58:09.719567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.719572 | orchestrator | 2026-04-13 00:58:09.719577 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-13 00:58:09.719581 | orchestrator | Monday 13 April 2026 00:52:02 +0000 (0:00:00.772) 0:05:39.130 ********** 2026-04-13 00:58:09.719586 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719590 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719595 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719599 | orchestrator | 2026-04-13 00:58:09.719604 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-13 00:58:09.719609 | orchestrator | Monday 13 April 2026 00:52:02 +0000 (0:00:00.318) 0:05:39.449 ********** 2026-04-13 00:58:09.719613 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719618 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719622 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719627 | orchestrator | 2026-04-13 00:58:09.719631 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-13 
00:58:09.719636 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:00.342) 0:05:39.791 ********** 2026-04-13 00:58:09.719640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-13 00:58:09.719645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-13 00:58:09.719649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-13 00:58:09.719654 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719658 | orchestrator | 2026-04-13 00:58:09.719663 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-13 00:58:09.719667 | orchestrator | Monday 13 April 2026 00:52:03 +0000 (0:00:00.839) 0:05:40.631 ********** 2026-04-13 00:58:09.719672 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719676 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719694 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719699 | orchestrator | 2026-04-13 00:58:09.719704 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-13 00:58:09.719708 | orchestrator | 2026-04-13 00:58:09.719713 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:58:09.719717 | orchestrator | Monday 13 April 2026 00:52:04 +0000 (0:00:00.862) 0:05:41.493 ********** 2026-04-13 00:58:09.719722 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.719727 | orchestrator | 2026-04-13 00:58:09.719731 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:58:09.719736 | orchestrator | Monday 13 April 2026 00:52:05 +0000 (0:00:00.532) 0:05:42.025 ********** 2026-04-13 00:58:09.719740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-13 00:58:09.719748 | orchestrator | 2026-04-13 00:58:09.719752 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:58:09.719757 | orchestrator | Monday 13 April 2026 00:52:06 +0000 (0:00:00.778) 0:05:42.804 ********** 2026-04-13 00:58:09.719761 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719766 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719770 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719775 | orchestrator | 2026-04-13 00:58:09.719779 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:58:09.719784 | orchestrator | Monday 13 April 2026 00:52:07 +0000 (0:00:01.670) 0:05:44.474 ********** 2026-04-13 00:58:09.719788 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719793 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719797 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719802 | orchestrator | 2026-04-13 00:58:09.719806 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:58:09.719811 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:00.326) 0:05:44.801 ********** 2026-04-13 00:58:09.719815 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719820 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719827 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719831 | orchestrator | 2026-04-13 00:58:09.719836 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:58:09.719840 | orchestrator | Monday 13 April 2026 00:52:08 +0000 (0:00:00.373) 0:05:45.175 ********** 2026-04-13 00:58:09.719845 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719849 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719854 | orchestrator | skipping: 
[testbed-node-2] 2026-04-13 00:58:09.719858 | orchestrator | 2026-04-13 00:58:09.719863 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:58:09.719867 | orchestrator | Monday 13 April 2026 00:52:09 +0000 (0:00:00.617) 0:05:45.792 ********** 2026-04-13 00:58:09.719872 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719876 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719881 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719885 | orchestrator | 2026-04-13 00:58:09.719890 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-13 00:58:09.719894 | orchestrator | Monday 13 April 2026 00:52:09 +0000 (0:00:00.752) 0:05:46.545 ********** 2026-04-13 00:58:09.719899 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719903 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719908 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719912 | orchestrator | 2026-04-13 00:58:09.719917 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:58:09.719921 | orchestrator | Monday 13 April 2026 00:52:10 +0000 (0:00:00.306) 0:05:46.851 ********** 2026-04-13 00:58:09.719926 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.719930 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.719935 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.719939 | orchestrator | 2026-04-13 00:58:09.719944 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:58:09.719948 | orchestrator | Monday 13 April 2026 00:52:10 +0000 (0:00:00.318) 0:05:47.170 ********** 2026-04-13 00:58:09.719953 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719957 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719962 | orchestrator | ok: [testbed-node-2] 2026-04-13 
00:58:09.719966 | orchestrator | 2026-04-13 00:58:09.719971 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:58:09.719975 | orchestrator | Monday 13 April 2026 00:52:11 +0000 (0:00:00.981) 0:05:48.151 ********** 2026-04-13 00:58:09.719980 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.719984 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.719989 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.719993 | orchestrator | 2026-04-13 00:58:09.720001 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:58:09.720005 | orchestrator | Monday 13 April 2026 00:52:12 +0000 (0:00:00.933) 0:05:49.085 ********** 2026-04-13 00:58:09.720010 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720015 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720019 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720024 | orchestrator | 2026-04-13 00:58:09.720028 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:58:09.720033 | orchestrator | Monday 13 April 2026 00:52:12 +0000 (0:00:00.289) 0:05:49.374 ********** 2026-04-13 00:58:09.720037 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.720042 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.720046 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.720051 | orchestrator | 2026-04-13 00:58:09.720055 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:58:09.720060 | orchestrator | Monday 13 April 2026 00:52:13 +0000 (0:00:00.325) 0:05:49.700 ********** 2026-04-13 00:58:09.720064 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720069 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720073 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720078 | orchestrator | 
2026-04-13 00:58:09.720082 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:58:09.720100 | orchestrator | Monday 13 April 2026 00:52:13 +0000 (0:00:00.364) 0:05:50.065 ********** 2026-04-13 00:58:09.720105 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720110 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720114 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720119 | orchestrator | 2026-04-13 00:58:09.720123 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:58:09.720128 | orchestrator | Monday 13 April 2026 00:52:14 +0000 (0:00:00.737) 0:05:50.802 ********** 2026-04-13 00:58:09.720132 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720137 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720141 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720145 | orchestrator | 2026-04-13 00:58:09.720150 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:58:09.720154 | orchestrator | Monday 13 April 2026 00:52:14 +0000 (0:00:00.332) 0:05:51.134 ********** 2026-04-13 00:58:09.720159 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720163 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720168 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720172 | orchestrator | 2026-04-13 00:58:09.720177 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:58:09.720181 | orchestrator | Monday 13 April 2026 00:52:14 +0000 (0:00:00.283) 0:05:51.418 ********** 2026-04-13 00:58:09.720186 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720190 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720195 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720199 | orchestrator | 
2026-04-13 00:58:09.720204 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:58:09.720208 | orchestrator | Monday 13 April 2026 00:52:15 +0000 (0:00:00.303) 0:05:51.721 ********** 2026-04-13 00:58:09.720213 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.720217 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.720222 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.720226 | orchestrator | 2026-04-13 00:58:09.720231 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:58:09.720235 | orchestrator | Monday 13 April 2026 00:52:15 +0000 (0:00:00.632) 0:05:52.354 ********** 2026-04-13 00:58:09.720240 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.720244 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.720249 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.720253 | orchestrator | 2026-04-13 00:58:09.720261 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:58:09.720271 | orchestrator | Monday 13 April 2026 00:52:16 +0000 (0:00:00.383) 0:05:52.737 ********** 2026-04-13 00:58:09.720275 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.720280 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.720284 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.720292 | orchestrator | 2026-04-13 00:58:09.720299 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-13 00:58:09.720307 | orchestrator | Monday 13 April 2026 00:52:16 +0000 (0:00:00.587) 0:05:53.324 ********** 2026-04-13 00:58:09.720314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-13 00:58:09.720322 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:58:09.720329 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-13 00:58:09.720351 | orchestrator | 2026-04-13 00:58:09.720358 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-13 00:58:09.720365 | orchestrator | Monday 13 April 2026 00:52:17 +0000 (0:00:00.912) 0:05:54.237 ********** 2026-04-13 00:58:09.720373 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.720381 | orchestrator | 2026-04-13 00:58:09.720387 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-13 00:58:09.720392 | orchestrator | Monday 13 April 2026 00:52:18 +0000 (0:00:00.900) 0:05:55.137 ********** 2026-04-13 00:58:09.720397 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.720401 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.720406 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.720410 | orchestrator | 2026-04-13 00:58:09.720415 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-13 00:58:09.720419 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:00.736) 0:05:55.874 ********** 2026-04-13 00:58:09.720423 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720428 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720432 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720437 | orchestrator | 2026-04-13 00:58:09.720441 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-13 00:58:09.720446 | orchestrator | Monday 13 April 2026 00:52:19 +0000 (0:00:00.339) 0:05:56.214 ********** 2026-04-13 00:58:09.720450 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:58:09.720455 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:58:09.720460 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-13 00:58:09.720464 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-13 00:58:09.720469 | orchestrator | 2026-04-13 00:58:09.720473 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-13 00:58:09.720478 | orchestrator | Monday 13 April 2026 00:52:30 +0000 (0:00:10.812) 0:06:07.026 ********** 2026-04-13 00:58:09.720482 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.720487 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.720491 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.720496 | orchestrator | 2026-04-13 00:58:09.720500 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-13 00:58:09.720505 | orchestrator | Monday 13 April 2026 00:52:31 +0000 (0:00:00.833) 0:06:07.859 ********** 2026-04-13 00:58:09.720509 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-13 00:58:09.720514 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-13 00:58:09.720518 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-13 00:58:09.720523 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-13 00:58:09.720528 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:58:09.720549 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:58:09.720554 | orchestrator | 2026-04-13 00:58:09.720559 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-13 00:58:09.720567 | orchestrator | Monday 13 April 2026 00:52:33 +0000 (0:00:02.402) 0:06:10.262 ********** 2026-04-13 00:58:09.720572 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-13 00:58:09.720577 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-13 00:58:09.720581 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-13 
00:58:09.720586 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-13 00:58:09.720590 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-13 00:58:09.720595 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-13 00:58:09.720599 | orchestrator | 2026-04-13 00:58:09.720604 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-13 00:58:09.720608 | orchestrator | Monday 13 April 2026 00:52:34 +0000 (0:00:01.324) 0:06:11.587 ********** 2026-04-13 00:58:09.720613 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.720617 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.720623 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.720631 | orchestrator | 2026-04-13 00:58:09.720639 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-13 00:58:09.720647 | orchestrator | Monday 13 April 2026 00:52:35 +0000 (0:00:00.737) 0:06:12.324 ********** 2026-04-13 00:58:09.720655 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720660 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720665 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720669 | orchestrator | 2026-04-13 00:58:09.720674 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-13 00:58:09.720678 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:00.594) 0:06:12.918 ********** 2026-04-13 00:58:09.720683 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720687 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720692 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720696 | orchestrator | 2026-04-13 00:58:09.720701 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-13 00:58:09.720705 | orchestrator | Monday 13 April 2026 00:52:36 +0000 (0:00:00.315) 0:06:13.234 
********** 2026-04-13 00:58:09.720713 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.720718 | orchestrator | 2026-04-13 00:58:09.720722 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-13 00:58:09.720727 | orchestrator | Monday 13 April 2026 00:52:37 +0000 (0:00:00.526) 0:06:13.761 ********** 2026-04-13 00:58:09.720731 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720736 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720740 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720745 | orchestrator | 2026-04-13 00:58:09.720749 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-13 00:58:09.720754 | orchestrator | Monday 13 April 2026 00:52:37 +0000 (0:00:00.589) 0:06:14.350 ********** 2026-04-13 00:58:09.720758 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720763 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.720767 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.720771 | orchestrator | 2026-04-13 00:58:09.720776 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-13 00:58:09.720780 | orchestrator | Monday 13 April 2026 00:52:38 +0000 (0:00:00.379) 0:06:14.729 ********** 2026-04-13 00:58:09.720785 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.720789 | orchestrator | 2026-04-13 00:58:09.720794 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-13 00:58:09.720798 | orchestrator | Monday 13 April 2026 00:52:38 +0000 (0:00:00.627) 0:06:15.357 ********** 2026-04-13 00:58:09.720803 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.720807 | orchestrator | changed: 
[testbed-node-1] 2026-04-13 00:58:09.720811 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.720820 | orchestrator | 2026-04-13 00:58:09.720824 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-13 00:58:09.720829 | orchestrator | Monday 13 April 2026 00:52:40 +0000 (0:00:01.395) 0:06:16.753 ********** 2026-04-13 00:58:09.720833 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.720838 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.720842 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.720847 | orchestrator | 2026-04-13 00:58:09.720851 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-13 00:58:09.720856 | orchestrator | Monday 13 April 2026 00:52:41 +0000 (0:00:01.590) 0:06:18.343 ********** 2026-04-13 00:58:09.720860 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.720865 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.720869 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.720874 | orchestrator | 2026-04-13 00:58:09.720878 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-13 00:58:09.720882 | orchestrator | Monday 13 April 2026 00:52:43 +0000 (0:00:01.660) 0:06:20.004 ********** 2026-04-13 00:58:09.720887 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.720892 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.720896 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.720900 | orchestrator | 2026-04-13 00:58:09.720905 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-13 00:58:09.720909 | orchestrator | Monday 13 April 2026 00:52:45 +0000 (0:00:02.612) 0:06:22.617 ********** 2026-04-13 00:58:09.720914 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.720918 | orchestrator | skipping: 
[testbed-node-1] 2026-04-13 00:58:09.720923 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-13 00:58:09.720927 | orchestrator | 2026-04-13 00:58:09.720932 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-13 00:58:09.720936 | orchestrator | Monday 13 April 2026 00:52:46 +0000 (0:00:00.385) 0:06:23.002 ********** 2026-04-13 00:58:09.720955 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-13 00:58:09.720961 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-13 00:58:09.720966 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-04-13 00:58:09.720970 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-04-13 00:58:09.720975 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-04-13 00:58:09.720979 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.720984 | orchestrator | 2026-04-13 00:58:09.720988 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-13 00:58:09.720993 | orchestrator | Monday 13 April 2026 00:53:17 +0000 (0:00:30.955) 0:06:53.958 ********** 2026-04-13 00:58:09.720997 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.721002 | orchestrator | 2026-04-13 00:58:09.721006 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-13 00:58:09.721011 | orchestrator | Monday 13 April 2026 00:53:18 +0000 (0:00:01.319) 0:06:55.278 ********** 2026-04-13 00:58:09.721016 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.721020 | orchestrator | 2026-04-13 00:58:09.721025 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-13 00:58:09.721029 | orchestrator | Monday 13 April 2026 00:53:18 +0000 (0:00:00.348) 0:06:55.626 ********** 2026-04-13 00:58:09.721034 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.721038 | orchestrator | 2026-04-13 00:58:09.721043 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-13 00:58:09.721047 | orchestrator | Monday 13 April 2026 00:53:19 +0000 (0:00:00.150) 0:06:55.777 ********** 2026-04-13 00:58:09.721055 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-13 00:58:09.721062 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-13 00:58:09.721067 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-13 00:58:09.721072 | orchestrator | 2026-04-13 00:58:09.721076 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-04-13 00:58:09.721081 | orchestrator | Monday 13 April 2026 00:53:25 +0000 (0:00:06.484) 0:07:02.262 ********** 2026-04-13 00:58:09.721085 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-13 00:58:09.721090 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-13 00:58:09.721094 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-13 00:58:09.721099 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-13 00:58:09.721104 | orchestrator | 2026-04-13 00:58:09.721108 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:58:09.721113 | orchestrator | Monday 13 April 2026 00:53:30 +0000 (0:00:04.800) 0:07:07.062 ********** 2026-04-13 00:58:09.721117 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.721122 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.721126 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.721131 | orchestrator | 2026-04-13 00:58:09.721135 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-13 00:58:09.721140 | orchestrator | Monday 13 April 2026 00:53:31 +0000 (0:00:00.996) 0:07:08.058 ********** 2026-04-13 00:58:09.721144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.721149 | orchestrator | 2026-04-13 00:58:09.721153 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-13 00:58:09.721158 | orchestrator | Monday 13 April 2026 00:53:31 +0000 (0:00:00.565) 0:07:08.623 ********** 2026-04-13 00:58:09.721162 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.721167 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.721172 | orchestrator | ok: 
[testbed-node-2] 2026-04-13 00:58:09.721176 | orchestrator | 2026-04-13 00:58:09.721181 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-13 00:58:09.721185 | orchestrator | Monday 13 April 2026 00:53:32 +0000 (0:00:00.331) 0:07:08.955 ********** 2026-04-13 00:58:09.721190 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.721194 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.721199 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.721203 | orchestrator | 2026-04-13 00:58:09.721208 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-13 00:58:09.721212 | orchestrator | Monday 13 April 2026 00:53:34 +0000 (0:00:01.730) 0:07:10.685 ********** 2026-04-13 00:58:09.721217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-13 00:58:09.721221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-13 00:58:09.721226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-13 00:58:09.721230 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.721235 | orchestrator | 2026-04-13 00:58:09.721239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-13 00:58:09.721244 | orchestrator | Monday 13 April 2026 00:53:35 +0000 (0:00:01.152) 0:07:11.838 ********** 2026-04-13 00:58:09.721248 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.721253 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.721257 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.721262 | orchestrator | 2026-04-13 00:58:09.721266 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-13 00:58:09.721271 | orchestrator | 2026-04-13 00:58:09.721275 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 
00:58:09.721280 | orchestrator | Monday 13 April 2026 00:53:35 +0000 (0:00:00.694) 0:07:12.532 ********** 2026-04-13 00:58:09.721303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.721309 | orchestrator | 2026-04-13 00:58:09.721313 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:58:09.721318 | orchestrator | Monday 13 April 2026 00:53:36 +0000 (0:00:00.738) 0:07:13.270 ********** 2026-04-13 00:58:09.721322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.721327 | orchestrator | 2026-04-13 00:58:09.721361 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:58:09.721368 | orchestrator | Monday 13 April 2026 00:53:37 +0000 (0:00:00.445) 0:07:13.716 ********** 2026-04-13 00:58:09.721372 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721377 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721382 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721386 | orchestrator | 2026-04-13 00:58:09.721391 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:58:09.721395 | orchestrator | Monday 13 April 2026 00:53:37 +0000 (0:00:00.260) 0:07:13.976 ********** 2026-04-13 00:58:09.721400 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721404 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721409 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721413 | orchestrator | 2026-04-13 00:58:09.721418 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:58:09.721422 | orchestrator | Monday 13 April 2026 00:53:38 +0000 (0:00:00.946) 0:07:14.922 ********** 
2026-04-13 00:58:09.721427 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721431 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721436 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721440 | orchestrator | 2026-04-13 00:58:09.721445 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:58:09.721450 | orchestrator | Monday 13 April 2026 00:53:38 +0000 (0:00:00.672) 0:07:15.595 ********** 2026-04-13 00:58:09.721454 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721459 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721463 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721468 | orchestrator | 2026-04-13 00:58:09.721475 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:58:09.721480 | orchestrator | Monday 13 April 2026 00:53:39 +0000 (0:00:00.770) 0:07:16.365 ********** 2026-04-13 00:58:09.721484 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721489 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721493 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721498 | orchestrator | 2026-04-13 00:58:09.721502 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-13 00:58:09.721507 | orchestrator | Monday 13 April 2026 00:53:39 +0000 (0:00:00.258) 0:07:16.624 ********** 2026-04-13 00:58:09.721511 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721516 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721521 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721525 | orchestrator | 2026-04-13 00:58:09.721530 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:58:09.721534 | orchestrator | Monday 13 April 2026 00:53:40 +0000 (0:00:00.520) 0:07:17.144 ********** 2026-04-13 00:58:09.721539 | 
orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721543 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721548 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721552 | orchestrator | 2026-04-13 00:58:09.721557 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:58:09.721562 | orchestrator | Monday 13 April 2026 00:53:40 +0000 (0:00:00.275) 0:07:17.420 ********** 2026-04-13 00:58:09.721566 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721571 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721580 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721585 | orchestrator | 2026-04-13 00:58:09.721589 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:58:09.721594 | orchestrator | Monday 13 April 2026 00:53:41 +0000 (0:00:00.659) 0:07:18.080 ********** 2026-04-13 00:58:09.721598 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721603 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721607 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721612 | orchestrator | 2026-04-13 00:58:09.721616 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:58:09.721621 | orchestrator | Monday 13 April 2026 00:53:42 +0000 (0:00:00.715) 0:07:18.795 ********** 2026-04-13 00:58:09.721626 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721630 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721635 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721639 | orchestrator | 2026-04-13 00:58:09.721644 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:58:09.721648 | orchestrator | Monday 13 April 2026 00:53:42 +0000 (0:00:00.546) 0:07:19.342 ********** 2026-04-13 00:58:09.721653 | orchestrator | skipping: 
[testbed-node-3] 2026-04-13 00:58:09.721657 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721662 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721666 | orchestrator | 2026-04-13 00:58:09.721671 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:58:09.721675 | orchestrator | Monday 13 April 2026 00:53:42 +0000 (0:00:00.298) 0:07:19.640 ********** 2026-04-13 00:58:09.721680 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721685 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721689 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721694 | orchestrator | 2026-04-13 00:58:09.721698 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:58:09.721703 | orchestrator | Monday 13 April 2026 00:53:43 +0000 (0:00:00.343) 0:07:19.984 ********** 2026-04-13 00:58:09.721707 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721712 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721716 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721721 | orchestrator | 2026-04-13 00:58:09.721725 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:58:09.721730 | orchestrator | Monday 13 April 2026 00:53:43 +0000 (0:00:00.340) 0:07:20.325 ********** 2026-04-13 00:58:09.721734 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721739 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721758 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721764 | orchestrator | 2026-04-13 00:58:09.721768 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:58:09.721773 | orchestrator | Monday 13 April 2026 00:53:44 +0000 (0:00:00.593) 0:07:20.919 ********** 2026-04-13 00:58:09.721777 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721782 | 
orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721787 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721791 | orchestrator | 2026-04-13 00:58:09.721796 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:58:09.721800 | orchestrator | Monday 13 April 2026 00:53:44 +0000 (0:00:00.285) 0:07:21.204 ********** 2026-04-13 00:58:09.721805 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721809 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721814 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721818 | orchestrator | 2026-04-13 00:58:09.721823 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:58:09.721827 | orchestrator | Monday 13 April 2026 00:53:44 +0000 (0:00:00.318) 0:07:21.523 ********** 2026-04-13 00:58:09.721832 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.721836 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721841 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721849 | orchestrator | 2026-04-13 00:58:09.721853 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:58:09.721858 | orchestrator | Monday 13 April 2026 00:53:45 +0000 (0:00:00.331) 0:07:21.854 ********** 2026-04-13 00:58:09.721862 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721867 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721871 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721876 | orchestrator | 2026-04-13 00:58:09.721881 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:58:09.721885 | orchestrator | Monday 13 April 2026 00:53:45 +0000 (0:00:00.639) 0:07:22.494 ********** 2026-04-13 00:58:09.721890 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721894 | orchestrator | ok: 
[testbed-node-4] 2026-04-13 00:58:09.721899 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721903 | orchestrator | 2026-04-13 00:58:09.721908 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-13 00:58:09.721915 | orchestrator | Monday 13 April 2026 00:53:46 +0000 (0:00:00.547) 0:07:23.041 ********** 2026-04-13 00:58:09.721920 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.721924 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.721928 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.721932 | orchestrator | 2026-04-13 00:58:09.721936 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-13 00:58:09.721940 | orchestrator | Monday 13 April 2026 00:53:46 +0000 (0:00:00.323) 0:07:23.364 ********** 2026-04-13 00:58:09.721944 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 00:58:09.721949 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 00:58:09.721953 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 00:58:09.721957 | orchestrator | 2026-04-13 00:58:09.721961 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-13 00:58:09.721965 | orchestrator | Monday 13 April 2026 00:53:47 +0000 (0:00:00.933) 0:07:24.297 ********** 2026-04-13 00:58:09.721969 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.721973 | orchestrator | 2026-04-13 00:58:09.721977 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-13 00:58:09.721981 | orchestrator | Monday 13 April 2026 00:53:48 +0000 (0:00:00.803) 0:07:25.100 ********** 2026-04-13 00:58:09.721985 | orchestrator | skipping: 
[testbed-node-3] 2026-04-13 00:58:09.721990 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.721994 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.721998 | orchestrator | 2026-04-13 00:58:09.722002 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-13 00:58:09.722006 | orchestrator | Monday 13 April 2026 00:53:48 +0000 (0:00:00.318) 0:07:25.418 ********** 2026-04-13 00:58:09.722010 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.722031 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.722037 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.722044 | orchestrator | 2026-04-13 00:58:09.722050 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-13 00:58:09.722057 | orchestrator | Monday 13 April 2026 00:53:49 +0000 (0:00:00.294) 0:07:25.712 ********** 2026-04-13 00:58:09.722063 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.722070 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.722076 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.722083 | orchestrator | 2026-04-13 00:58:09.722089 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-13 00:58:09.722095 | orchestrator | Monday 13 April 2026 00:53:49 +0000 (0:00:00.891) 0:07:26.604 ********** 2026-04-13 00:58:09.722102 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.722108 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.722115 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.722121 | orchestrator | 2026-04-13 00:58:09.722134 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-13 00:58:09.722141 | orchestrator | Monday 13 April 2026 00:53:50 +0000 (0:00:00.361) 0:07:26.966 ********** 2026-04-13 00:58:09.722147 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-13 00:58:09.722154 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-13 00:58:09.722160 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-13 00:58:09.722165 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-13 00:58:09.722169 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-13 00:58:09.722177 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-13 00:58:09.722181 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-13 00:58:09.722185 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-13 00:58:09.722198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-13 00:58:09.722202 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-13 00:58:09.722206 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-13 00:58:09.722210 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-13 00:58:09.722214 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-13 00:58:09.722218 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-13 00:58:09.722222 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-13 00:58:09.722227 | orchestrator | 2026-04-13 00:58:09.722231 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
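[editor's note] The "Apply operating system tuning" task above sets five kernel parameters on each OSD node. A minimal sketch of what that item list amounts to, rendered in `sysctl.conf` form — the values are taken directly from the log output; the `render` helper and the `enable` default are illustrative, not ceph-ansible's actual implementation:

```python
# Items exactly as shown in the task output above (testbed-node-3/4/5).
items = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render(items):
    # Items without an explicit "enable" key are treated as enabled,
    # mirroring how every item in this run was applied.
    return "\n".join(
        f"{i['name']} = {i['value']}" for i in items if i.get("enable", True)
    )

print(render(items))
```

`vm.min_free_kbytes` is set to 67584 here because the preceding "Get default vm.min_free_kbytes" / "Set_fact vm_min_free_kbytes" tasks derive it from the node's memory rather than hard-coding it.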
2026-04-13 00:58:09.722235 | orchestrator | Monday 13 April 2026 00:53:53 +0000 (0:00:03.038) 0:07:30.004 ********** 2026-04-13 00:58:09.722239 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.722243 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.722247 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.722251 | orchestrator | 2026-04-13 00:58:09.722255 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-13 00:58:09.722259 | orchestrator | Monday 13 April 2026 00:53:53 +0000 (0:00:00.349) 0:07:30.354 ********** 2026-04-13 00:58:09.722263 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.722267 | orchestrator | 2026-04-13 00:58:09.722275 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-13 00:58:09.722279 | orchestrator | Monday 13 April 2026 00:53:54 +0000 (0:00:00.790) 0:07:31.144 ********** 2026-04-13 00:58:09.722286 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-13 00:58:09.722293 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-13 00:58:09.722299 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-13 00:58:09.722309 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-13 00:58:09.722318 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-13 00:58:09.722325 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-13 00:58:09.722345 | orchestrator | 2026-04-13 00:58:09.722352 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-13 00:58:09.722359 | orchestrator | Monday 13 April 2026 00:53:55 +0000 (0:00:01.068) 0:07:32.213 ********** 2026-04-13 00:58:09.722365 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 00:58:09.722379 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-13 00:58:09.722386 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 00:58:09.722393 | orchestrator | 2026-04-13 00:58:09.722400 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-13 00:58:09.722407 | orchestrator | Monday 13 April 2026 00:53:57 +0000 (0:00:02.210) 0:07:34.423 ********** 2026-04-13 00:58:09.722413 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-13 00:58:09.722421 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-13 00:58:09.722425 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.722429 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-13 00:58:09.722433 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-13 00:58:09.722438 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.722445 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-13 00:58:09.722451 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-13 00:58:09.722456 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.722462 | orchestrator | 2026-04-13 00:58:09.722469 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-13 00:58:09.722477 | orchestrator | Monday 13 April 2026 00:53:58 +0000 (0:00:01.162) 0:07:35.585 ********** 2026-04-13 00:58:09.722483 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.722490 | orchestrator | 2026-04-13 00:58:09.722496 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-13 00:58:09.722502 | orchestrator | Monday 13 April 2026 00:54:01 +0000 (0:00:02.895) 0:07:38.481 ********** 2026-04-13 00:58:09.722509 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.722516 | orchestrator | 2026-04-13 00:58:09.722522 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-13 00:58:09.722528 | orchestrator | Monday 13 April 2026 00:54:02 +0000 (0:00:00.539) 0:07:39.020 ********** 2026-04-13 00:58:09.722535 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000', 'data_vg': 'ceph-d9f8332f-65b5-5ad5-8d64-0b4e5e7cc000'}) 2026-04-13 00:58:09.722542 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a', 'data_vg': 'ceph-9b6aa2f8-de46-5cb6-b1a4-58b08f65cf0a'}) 2026-04-13 00:58:09.722557 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-586ba51f-dba7-5dcd-8710-1804179cab86', 'data_vg': 'ceph-586ba51f-dba7-5dcd-8710-1804179cab86'}) 2026-04-13 00:58:09.722563 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7331b6c9-9d3b-5dac-8499-53ee0940f196', 'data_vg': 'ceph-7331b6c9-9d3b-5dac-8499-53ee0940f196'}) 2026-04-13 00:58:09.722570 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-100799fe-f0b8-5d68-80c9-d39d0aace7f9', 'data_vg': 'ceph-100799fe-f0b8-5d68-80c9-d39d0aace7f9'}) 2026-04-13 00:58:09.722576 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-971aa970-5a40-5da7-9620-8f2c789358d2', 'data_vg': 'ceph-971aa970-5a40-5da7-9620-8f2c789358d2'}) 2026-04-13 00:58:09.722583 | orchestrator | 2026-04-13 00:58:09.722589 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-13 00:58:09.722595 | orchestrator | Monday 13 April 2026 00:54:46 +0000 (0:00:43.933) 0:08:22.953 ********** 2026-04-13 00:58:09.722599 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.722603 | orchestrator | skipping: [testbed-node-4] 2026-04-13 
00:58:09.722607 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.722611 | orchestrator | 2026-04-13 00:58:09.722615 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-13 00:58:09.722619 | orchestrator | Monday 13 April 2026 00:54:46 +0000 (0:00:00.570) 0:08:23.523 ********** 2026-04-13 00:58:09.722624 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.722632 | orchestrator | 2026-04-13 00:58:09.722636 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-13 00:58:09.722641 | orchestrator | Monday 13 April 2026 00:54:47 +0000 (0:00:00.522) 0:08:24.045 ********** 2026-04-13 00:58:09.722645 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.722649 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.722653 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.722657 | orchestrator | 2026-04-13 00:58:09.722661 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-13 00:58:09.722682 | orchestrator | Monday 13 April 2026 00:54:48 +0000 (0:00:00.653) 0:08:24.699 ********** 2026-04-13 00:58:09.722686 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.722690 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.722694 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.722698 | orchestrator | 2026-04-13 00:58:09.722702 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-13 00:58:09.722707 | orchestrator | Monday 13 April 2026 00:54:51 +0000 (0:00:03.058) 0:08:27.758 ********** 2026-04-13 00:58:09.722711 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.722715 | orchestrator | 2026-04-13 00:58:09.722719 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-13 00:58:09.722723 | orchestrator | Monday 13 April 2026 00:54:51 +0000 (0:00:00.535) 0:08:28.293 ********** 2026-04-13 00:58:09.722727 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.722731 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.722735 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.722740 | orchestrator | 2026-04-13 00:58:09.722744 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-13 00:58:09.722748 | orchestrator | Monday 13 April 2026 00:54:52 +0000 (0:00:01.224) 0:08:29.518 ********** 2026-04-13 00:58:09.722752 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.722756 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.722760 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.722764 | orchestrator | 2026-04-13 00:58:09.722768 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-13 00:58:09.722772 | orchestrator | Monday 13 April 2026 00:54:54 +0000 (0:00:01.444) 0:08:30.963 ********** 2026-04-13 00:58:09.722776 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.722783 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.722789 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.722797 | orchestrator | 2026-04-13 00:58:09.722802 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-13 00:58:09.722806 | orchestrator | Monday 13 April 2026 00:54:56 +0000 (0:00:01.940) 0:08:32.904 ********** 2026-04-13 00:58:09.722811 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.722815 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.722819 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.722823 | orchestrator | 2026-04-13 00:58:09.722827 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-13 00:58:09.722831 | orchestrator | Monday 13 April 2026 00:54:56 +0000 (0:00:00.329) 0:08:33.233 ********** 2026-04-13 00:58:09.722835 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.722839 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.722843 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.722847 | orchestrator | 2026-04-13 00:58:09.722851 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-13 00:58:09.722855 | orchestrator | Monday 13 April 2026 00:54:56 +0000 (0:00:00.334) 0:08:33.568 ********** 2026-04-13 00:58:09.722859 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-13 00:58:09.722863 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-13 00:58:09.722867 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-13 00:58:09.722871 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-13 00:58:09.722876 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-13 00:58:09.722883 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-13 00:58:09.722887 | orchestrator | 2026-04-13 00:58:09.722891 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-13 00:58:09.722895 | orchestrator | Monday 13 April 2026 00:54:58 +0000 (0:00:01.298) 0:08:34.866 ********** 2026-04-13 00:58:09.722899 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-13 00:58:09.722903 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-13 00:58:09.722908 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-13 00:58:09.722912 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-13 00:58:09.722916 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-13 00:58:09.722923 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-13 00:58:09.722927 | orchestrator | 2026-04-13 00:58:09.722931 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-13 00:58:09.722935 | orchestrator | Monday 13 April 2026 00:55:00 +0000 (0:00:02.215) 0:08:37.082 ********** 2026-04-13 00:58:09.722939 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-13 00:58:09.722944 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-13 00:58:09.722948 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-13 00:58:09.722952 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-13 00:58:09.722956 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-13 00:58:09.722960 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-13 00:58:09.722964 | orchestrator | 2026-04-13 00:58:09.722968 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-13 00:58:09.722972 | orchestrator | Monday 13 April 2026 00:55:04 +0000 (0:00:03.815) 0:08:40.897 ********** 2026-04-13 00:58:09.722976 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.722980 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.722984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.722988 | orchestrator | 2026-04-13 00:58:09.722993 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-13 00:58:09.722997 | orchestrator | Monday 13 April 2026 00:55:06 +0000 (0:00:02.207) 0:08:43.104 ********** 2026-04-13 00:58:09.723001 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723005 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723009 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
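[editor's note] The "Wait for all osd to be up" task retried once above (60 retries budgeted) before succeeding, because the six freshly started OSDs need a moment to register with the monitors. A sketch of the kind of readiness predicate such a wait loop evaluates — the JSON field names are assumptions modelled on `ceph osd stat -f json` output, not a verified copy of ceph-ansible's check:

```python
import json

def all_osds_up(osd_stat_json: str) -> bool:
    # Ready only when at least one OSD exists and every OSD reports "up".
    stat = json.loads(osd_stat_json)
    return stat["num_osds"] > 0 and stat["num_osds"] == stat["num_up_osds"]

# Illustrative payloads for this 6-OSD run: one OSD still booting,
# then the settled state that let the retry succeed.
booting = '{"num_osds": 6, "num_up_osds": 5, "num_in_osds": 6}'
ready = '{"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6}'
```

The preceding "Unset noup flag" task matters here: with `noup` still set (applied before OSD creation), the OSDs would never be marked up and the wait would exhaust its retries.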
2026-04-13 00:58:09.723013 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.723017 | orchestrator | 2026-04-13 00:58:09.723021 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-13 00:58:09.723025 | orchestrator | Monday 13 April 2026 00:55:19 +0000 (0:00:13.111) 0:08:56.216 ********** 2026-04-13 00:58:09.723029 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723033 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723040 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723044 | orchestrator | 2026-04-13 00:58:09.723048 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-13 00:58:09.723052 | orchestrator | Monday 13 April 2026 00:55:20 +0000 (0:00:00.871) 0:08:57.088 ********** 2026-04-13 00:58:09.723056 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723060 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723064 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723068 | orchestrator | 2026-04-13 00:58:09.723072 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-13 00:58:09.723077 | orchestrator | Monday 13 April 2026 00:55:21 +0000 (0:00:00.640) 0:08:57.729 ********** 2026-04-13 00:58:09.723081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.723085 | orchestrator | 2026-04-13 00:58:09.723089 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-13 00:58:09.723096 | orchestrator | Monday 13 April 2026 00:55:21 +0000 (0:00:00.592) 0:08:58.322 ********** 2026-04-13 00:58:09.723101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:58:09.723105 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-13 00:58:09.723109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:58:09.723113 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723117 | orchestrator | 2026-04-13 00:58:09.723121 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-13 00:58:09.723125 | orchestrator | Monday 13 April 2026 00:55:22 +0000 (0:00:00.393) 0:08:58.716 ********** 2026-04-13 00:58:09.723129 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723133 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723137 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723141 | orchestrator | 2026-04-13 00:58:09.723145 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-13 00:58:09.723149 | orchestrator | Monday 13 April 2026 00:55:22 +0000 (0:00:00.332) 0:08:59.048 ********** 2026-04-13 00:58:09.723153 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723158 | orchestrator | 2026-04-13 00:58:09.723162 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-13 00:58:09.723166 | orchestrator | Monday 13 April 2026 00:55:22 +0000 (0:00:00.240) 0:08:59.288 ********** 2026-04-13 00:58:09.723170 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723174 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723178 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723182 | orchestrator | 2026-04-13 00:58:09.723186 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-13 00:58:09.723190 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.595) 0:08:59.884 ********** 2026-04-13 00:58:09.723194 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723198 | orchestrator | 2026-04-13 00:58:09.723202 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-13 00:58:09.723206 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.259) 0:09:00.144 ********** 2026-04-13 00:58:09.723210 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723214 | orchestrator | 2026-04-13 00:58:09.723219 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-13 00:58:09.723223 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.221) 0:09:00.365 ********** 2026-04-13 00:58:09.723227 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723231 | orchestrator | 2026-04-13 00:58:09.723235 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-13 00:58:09.723239 | orchestrator | Monday 13 April 2026 00:55:23 +0000 (0:00:00.118) 0:09:00.484 ********** 2026-04-13 00:58:09.723243 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723247 | orchestrator | 2026-04-13 00:58:09.723254 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-13 00:58:09.723258 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.241) 0:09:00.725 ********** 2026-04-13 00:58:09.723262 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723266 | orchestrator | 2026-04-13 00:58:09.723270 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-13 00:58:09.723274 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.213) 0:09:00.938 ********** 2026-04-13 00:58:09.723278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 00:58:09.723282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 00:58:09.723286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 00:58:09.723290 | orchestrator | skipping: [testbed-node-3] 2026-04-13 
00:58:09.723295 | orchestrator | 2026-04-13 00:58:09.723299 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-13 00:58:09.723303 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.390) 0:09:01.329 ********** 2026-04-13 00:58:09.723310 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723314 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723318 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723322 | orchestrator | 2026-04-13 00:58:09.723328 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-13 00:58:09.723371 | orchestrator | Monday 13 April 2026 00:55:24 +0000 (0:00:00.312) 0:09:01.641 ********** 2026-04-13 00:58:09.723379 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723385 | orchestrator | 2026-04-13 00:58:09.723391 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-13 00:58:09.723397 | orchestrator | Monday 13 April 2026 00:55:25 +0000 (0:00:00.858) 0:09:02.500 ********** 2026-04-13 00:58:09.723404 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723411 | orchestrator | 2026-04-13 00:58:09.723418 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-13 00:58:09.723424 | orchestrator | 2026-04-13 00:58:09.723431 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:58:09.723439 | orchestrator | Monday 13 April 2026 00:55:26 +0000 (0:00:00.749) 0:09:03.250 ********** 2026-04-13 00:58:09.723447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.723452 | orchestrator | 2026-04-13 00:58:09.723456 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-13 00:58:09.723459 | orchestrator | Monday 13 April 2026 00:55:27 +0000 (0:00:01.343) 0:09:04.593 ********** 2026-04-13 00:58:09.723463 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.723467 | orchestrator | 2026-04-13 00:58:09.723471 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:58:09.723475 | orchestrator | Monday 13 April 2026 00:55:29 +0000 (0:00:01.305) 0:09:05.899 ********** 2026-04-13 00:58:09.723478 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723482 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723486 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723489 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.723493 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.723497 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.723501 | orchestrator | 2026-04-13 00:58:09.723504 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-13 00:58:09.723508 | orchestrator | Monday 13 April 2026 00:55:30 +0000 (0:00:01.377) 0:09:07.276 ********** 2026-04-13 00:58:09.723512 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723516 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723519 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723523 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723527 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723530 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723534 | orchestrator | 2026-04-13 00:58:09.723538 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:58:09.723542 | orchestrator | Monday 13 
April 2026 00:55:31 +0000 (0:00:00.704) 0:09:07.980 ********** 2026-04-13 00:58:09.723545 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723549 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723553 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723556 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723560 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723564 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723567 | orchestrator | 2026-04-13 00:58:09.723571 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:58:09.723575 | orchestrator | Monday 13 April 2026 00:55:32 +0000 (0:00:00.696) 0:09:08.677 ********** 2026-04-13 00:58:09.723579 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723586 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723590 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723594 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723597 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723601 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723605 | orchestrator | 2026-04-13 00:58:09.723609 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:58:09.723612 | orchestrator | Monday 13 April 2026 00:55:33 +0000 (0:00:01.061) 0:09:09.738 ********** 2026-04-13 00:58:09.723616 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723620 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723623 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723627 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.723631 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.723635 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.723638 | orchestrator | 2026-04-13 00:58:09.723642 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-04-13 00:58:09.723646 | orchestrator | Monday 13 April 2026 00:55:34 +0000 (0:00:01.012) 0:09:10.751 ********** 2026-04-13 00:58:09.723650 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723653 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723660 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723664 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723668 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723672 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723675 | orchestrator | 2026-04-13 00:58:09.723679 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:58:09.723683 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.931) 0:09:11.682 ********** 2026-04-13 00:58:09.723687 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723690 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723694 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723698 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723701 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723705 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723709 | orchestrator | 2026-04-13 00:58:09.723712 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:58:09.723716 | orchestrator | Monday 13 April 2026 00:55:35 +0000 (0:00:00.607) 0:09:12.290 ********** 2026-04-13 00:58:09.723720 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723724 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723727 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723731 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.723735 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.723738 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.723742 | 
orchestrator | 2026-04-13 00:58:09.723746 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:58:09.723750 | orchestrator | Monday 13 April 2026 00:55:37 +0000 (0:00:01.384) 0:09:13.674 ********** 2026-04-13 00:58:09.723753 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723757 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723761 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723764 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.723768 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.723772 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.723775 | orchestrator | 2026-04-13 00:58:09.723779 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:58:09.723783 | orchestrator | Monday 13 April 2026 00:55:38 +0000 (0:00:01.128) 0:09:14.803 ********** 2026-04-13 00:58:09.723786 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723790 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723796 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723800 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723803 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723810 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723814 | orchestrator | 2026-04-13 00:58:09.723818 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:58:09.723821 | orchestrator | Monday 13 April 2026 00:55:39 +0000 (0:00:01.046) 0:09:15.849 ********** 2026-04-13 00:58:09.723825 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723829 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723832 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723836 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.723840 | orchestrator | ok: [testbed-node-1] 2026-04-13 
00:58:09.723843 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.723847 | orchestrator | 2026-04-13 00:58:09.723851 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:58:09.723855 | orchestrator | Monday 13 April 2026 00:55:40 +0000 (0:00:00.873) 0:09:16.722 ********** 2026-04-13 00:58:09.723858 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723862 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723866 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723869 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723873 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723877 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723880 | orchestrator | 2026-04-13 00:58:09.723884 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:58:09.723888 | orchestrator | Monday 13 April 2026 00:55:41 +0000 (0:00:01.077) 0:09:17.800 ********** 2026-04-13 00:58:09.723892 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723895 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723899 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723903 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723906 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723910 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723914 | orchestrator | 2026-04-13 00:58:09.723917 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:58:09.723921 | orchestrator | Monday 13 April 2026 00:55:41 +0000 (0:00:00.694) 0:09:18.494 ********** 2026-04-13 00:58:09.723925 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.723929 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.723932 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.723936 | orchestrator | skipping: [testbed-node-0] 
2026-04-13 00:58:09.723940 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723943 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723947 | orchestrator | 2026-04-13 00:58:09.723951 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:58:09.723954 | orchestrator | Monday 13 April 2026 00:55:42 +0000 (0:00:00.892) 0:09:19.387 ********** 2026-04-13 00:58:09.723958 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723962 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723966 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.723969 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.723973 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.723977 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.723980 | orchestrator | 2026-04-13 00:58:09.723984 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:58:09.723988 | orchestrator | Monday 13 April 2026 00:55:43 +0000 (0:00:00.660) 0:09:20.047 ********** 2026-04-13 00:58:09.723991 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.723995 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.723999 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724002 | orchestrator | skipping: [testbed-node-0] 2026-04-13 00:58:09.724006 | orchestrator | skipping: [testbed-node-1] 2026-04-13 00:58:09.724010 | orchestrator | skipping: [testbed-node-2] 2026-04-13 00:58:09.724013 | orchestrator | 2026-04-13 00:58:09.724017 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:58:09.724021 | orchestrator | Monday 13 April 2026 00:55:44 +0000 (0:00:00.873) 0:09:20.921 ********** 2026-04-13 00:58:09.724030 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724034 | orchestrator | skipping: [testbed-node-4] 
2026-04-13 00:58:09.724037 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724041 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.724045 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.724048 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.724052 | orchestrator | 2026-04-13 00:58:09.724056 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:58:09.724060 | orchestrator | Monday 13 April 2026 00:55:44 +0000 (0:00:00.668) 0:09:21.590 ********** 2026-04-13 00:58:09.724064 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724067 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724071 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724075 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.724078 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.724082 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.724086 | orchestrator | 2026-04-13 00:58:09.724089 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:58:09.724093 | orchestrator | Monday 13 April 2026 00:55:45 +0000 (0:00:01.012) 0:09:22.603 ********** 2026-04-13 00:58:09.724097 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724101 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724104 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724108 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.724112 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.724115 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.724119 | orchestrator | 2026-04-13 00:58:09.724123 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-13 00:58:09.724126 | orchestrator | Monday 13 April 2026 00:55:47 +0000 (0:00:01.296) 0:09:23.899 ********** 2026-04-13 00:58:09.724130 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.724134 | orchestrator | 2026-04-13 00:58:09.724138 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-13 00:58:09.724141 | orchestrator | Monday 13 April 2026 00:55:51 +0000 (0:00:04.148) 0:09:28.048 ********** 2026-04-13 00:58:09.724145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.724149 | orchestrator | 2026-04-13 00:58:09.724153 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-13 00:58:09.724159 | orchestrator | Monday 13 April 2026 00:55:53 +0000 (0:00:02.095) 0:09:30.144 ********** 2026-04-13 00:58:09.724163 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.724167 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.724170 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.724174 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.724178 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.724181 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.724185 | orchestrator | 2026-04-13 00:58:09.724189 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-13 00:58:09.724192 | orchestrator | Monday 13 April 2026 00:55:55 +0000 (0:00:01.708) 0:09:31.852 ********** 2026-04-13 00:58:09.724196 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.724200 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.724203 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.724207 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.724211 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.724215 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.724218 | orchestrator | 2026-04-13 00:58:09.724222 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-04-13 00:58:09.724226 | orchestrator | Monday 13 April 2026 00:55:56 +0000 (0:00:01.341) 0:09:33.193 ********** 2026-04-13 00:58:09.724232 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.724243 | orchestrator | 2026-04-13 00:58:09.724247 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-13 00:58:09.724251 | orchestrator | Monday 13 April 2026 00:55:57 +0000 (0:00:01.244) 0:09:34.437 ********** 2026-04-13 00:58:09.724255 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.724258 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.724262 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.724266 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.724270 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.724273 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.724277 | orchestrator | 2026-04-13 00:58:09.724281 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-13 00:58:09.724284 | orchestrator | Monday 13 April 2026 00:55:59 +0000 (0:00:01.499) 0:09:35.937 ********** 2026-04-13 00:58:09.724288 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.724292 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.724295 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.724299 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.724303 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.724306 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.724310 | orchestrator | 2026-04-13 00:58:09.724314 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-13 00:58:09.724318 | orchestrator | Monday 13 April 2026 00:56:02 +0000 (0:00:03.531) 
0:09:39.468 ********** 2026-04-13 00:58:09.724321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 00:58:09.724325 | orchestrator | 2026-04-13 00:58:09.724329 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-13 00:58:09.724346 | orchestrator | Monday 13 April 2026 00:56:04 +0000 (0:00:01.251) 0:09:40.719 ********** 2026-04-13 00:58:09.724351 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724354 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724358 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724362 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.724365 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.724369 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.724373 | orchestrator | 2026-04-13 00:58:09.724377 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-13 00:58:09.724380 | orchestrator | Monday 13 April 2026 00:56:04 +0000 (0:00:00.641) 0:09:41.361 ********** 2026-04-13 00:58:09.724384 | orchestrator | changed: [testbed-node-3] 2026-04-13 00:58:09.724390 | orchestrator | changed: [testbed-node-4] 2026-04-13 00:58:09.724394 | orchestrator | changed: [testbed-node-5] 2026-04-13 00:58:09.724398 | orchestrator | changed: [testbed-node-0] 2026-04-13 00:58:09.724402 | orchestrator | changed: [testbed-node-1] 2026-04-13 00:58:09.724405 | orchestrator | changed: [testbed-node-2] 2026-04-13 00:58:09.724409 | orchestrator | 2026-04-13 00:58:09.724413 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-13 00:58:09.724417 | orchestrator | Monday 13 April 2026 00:56:07 +0000 (0:00:02.775) 0:09:44.137 ********** 2026-04-13 00:58:09.724420 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724424 | 
orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724428 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724431 | orchestrator | ok: [testbed-node-0] 2026-04-13 00:58:09.724435 | orchestrator | ok: [testbed-node-1] 2026-04-13 00:58:09.724439 | orchestrator | ok: [testbed-node-2] 2026-04-13 00:58:09.724443 | orchestrator | 2026-04-13 00:58:09.724446 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-13 00:58:09.724450 | orchestrator | 2026-04-13 00:58:09.724454 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-13 00:58:09.724457 | orchestrator | Monday 13 April 2026 00:56:08 +0000 (0:00:00.843) 0:09:44.981 ********** 2026-04-13 00:58:09.724461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.724468 | orchestrator | 2026-04-13 00:58:09.724471 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-13 00:58:09.724475 | orchestrator | Monday 13 April 2026 00:56:09 +0000 (0:00:00.747) 0:09:45.728 ********** 2026-04-13 00:58:09.724479 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 00:58:09.724483 | orchestrator | 2026-04-13 00:58:09.724487 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-13 00:58:09.724490 | orchestrator | Monday 13 April 2026 00:56:09 +0000 (0:00:00.553) 0:09:46.282 ********** 2026-04-13 00:58:09.724494 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724498 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724502 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724505 | orchestrator | 2026-04-13 00:58:09.724511 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-04-13 00:58:09.724515 | orchestrator | Monday 13 April 2026 00:56:10 +0000 (0:00:00.616) 0:09:46.899 ********** 2026-04-13 00:58:09.724519 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724522 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724526 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724530 | orchestrator | 2026-04-13 00:58:09.724534 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-13 00:58:09.724537 | orchestrator | Monday 13 April 2026 00:56:10 +0000 (0:00:00.766) 0:09:47.666 ********** 2026-04-13 00:58:09.724541 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724545 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724549 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724552 | orchestrator | 2026-04-13 00:58:09.724556 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-13 00:58:09.724560 | orchestrator | Monday 13 April 2026 00:56:11 +0000 (0:00:00.837) 0:09:48.503 ********** 2026-04-13 00:58:09.724563 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724567 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724571 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724575 | orchestrator | 2026-04-13 00:58:09.724578 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-13 00:58:09.724582 | orchestrator | Monday 13 April 2026 00:56:12 +0000 (0:00:00.756) 0:09:49.260 ********** 2026-04-13 00:58:09.724586 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724589 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724593 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724597 | orchestrator | 2026-04-13 00:58:09.724601 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-13 
00:58:09.724604 | orchestrator | Monday 13 April 2026 00:56:13 +0000 (0:00:00.587) 0:09:49.847 ********** 2026-04-13 00:58:09.724608 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724612 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724616 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724619 | orchestrator | 2026-04-13 00:58:09.724623 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-13 00:58:09.724627 | orchestrator | Monday 13 April 2026 00:56:13 +0000 (0:00:00.346) 0:09:50.194 ********** 2026-04-13 00:58:09.724631 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724634 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724638 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724642 | orchestrator | 2026-04-13 00:58:09.724645 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-13 00:58:09.724649 | orchestrator | Monday 13 April 2026 00:56:13 +0000 (0:00:00.305) 0:09:50.500 ********** 2026-04-13 00:58:09.724653 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724657 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724660 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724664 | orchestrator | 2026-04-13 00:58:09.724668 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-13 00:58:09.724674 | orchestrator | Monday 13 April 2026 00:56:14 +0000 (0:00:00.746) 0:09:51.247 ********** 2026-04-13 00:58:09.724678 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724682 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724685 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724689 | orchestrator | 2026-04-13 00:58:09.724693 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-13 00:58:09.724697 | orchestrator | Monday 
13 April 2026 00:56:15 +0000 (0:00:01.084) 0:09:52.331 ********** 2026-04-13 00:58:09.724700 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724704 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724708 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724712 | orchestrator | 2026-04-13 00:58:09.724715 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-13 00:58:09.724719 | orchestrator | Monday 13 April 2026 00:56:15 +0000 (0:00:00.314) 0:09:52.646 ********** 2026-04-13 00:58:09.724723 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724728 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724732 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724736 | orchestrator | 2026-04-13 00:58:09.724740 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-13 00:58:09.724743 | orchestrator | Monday 13 April 2026 00:56:16 +0000 (0:00:00.301) 0:09:52.948 ********** 2026-04-13 00:58:09.724747 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724751 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724755 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724758 | orchestrator | 2026-04-13 00:58:09.724762 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-13 00:58:09.724766 | orchestrator | Monday 13 April 2026 00:56:16 +0000 (0:00:00.325) 0:09:53.273 ********** 2026-04-13 00:58:09.724770 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724773 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724777 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724781 | orchestrator | 2026-04-13 00:58:09.724784 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-13 00:58:09.724788 | orchestrator | Monday 13 April 2026 00:56:17 +0000 
(0:00:00.641) 0:09:53.914 ********** 2026-04-13 00:58:09.724792 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724796 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724799 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724803 | orchestrator | 2026-04-13 00:58:09.724807 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-13 00:58:09.724810 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.360) 0:09:54.275 ********** 2026-04-13 00:58:09.724814 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724818 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724822 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724825 | orchestrator | 2026-04-13 00:58:09.724829 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-13 00:58:09.724833 | orchestrator | Monday 13 April 2026 00:56:17 +0000 (0:00:00.307) 0:09:54.583 ********** 2026-04-13 00:58:09.724837 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724840 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724844 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724848 | orchestrator | 2026-04-13 00:58:09.724852 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-13 00:58:09.724859 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.266) 0:09:54.849 ********** 2026-04-13 00:58:09.724863 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724867 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724870 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724874 | orchestrator | 2026-04-13 00:58:09.724878 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-13 00:58:09.724882 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.426) 
0:09:55.276 ********** 2026-04-13 00:58:09.724889 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724893 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724896 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724900 | orchestrator | 2026-04-13 00:58:09.724904 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-13 00:58:09.724908 | orchestrator | Monday 13 April 2026 00:56:18 +0000 (0:00:00.291) 0:09:55.568 ********** 2026-04-13 00:58:09.724911 | orchestrator | ok: [testbed-node-3] 2026-04-13 00:58:09.724915 | orchestrator | ok: [testbed-node-4] 2026-04-13 00:58:09.724919 | orchestrator | ok: [testbed-node-5] 2026-04-13 00:58:09.724922 | orchestrator | 2026-04-13 00:58:09.724926 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-13 00:58:09.724930 | orchestrator | Monday 13 April 2026 00:56:19 +0000 (0:00:00.470) 0:09:56.038 ********** 2026-04-13 00:58:09.724934 | orchestrator | skipping: [testbed-node-4] 2026-04-13 00:58:09.724937 | orchestrator | skipping: [testbed-node-5] 2026-04-13 00:58:09.724941 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-13 00:58:09.724945 | orchestrator | 2026-04-13 00:58:09.724949 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-04-13 00:58:09.724952 | orchestrator | Monday 13 April 2026 00:56:19 +0000 (0:00:00.508) 0:09:56.546 ********** 2026-04-13 00:58:09.724956 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.724962 | orchestrator | 2026-04-13 00:58:09.724968 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-13 00:58:09.724974 | orchestrator | Monday 13 April 2026 00:56:22 +0000 (0:00:02.247) 0:09:58.794 ********** 2026-04-13 00:58:09.724981 | orchestrator | skipping: [testbed-node-3] 
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-13 00:58:09.724988 | orchestrator | skipping: [testbed-node-3] 2026-04-13 00:58:09.724994 | orchestrator | 2026-04-13 00:58:09.725000 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-13 00:58:09.725007 | orchestrator | Monday 13 April 2026 00:56:22 +0000 (0:00:00.256) 0:09:59.050 ********** 2026-04-13 00:58:09.725013 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:58:09.725022 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 00:58:09.725026 | orchestrator | 2026-04-13 00:58:09.725030 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-13 00:58:09.725033 | orchestrator | Monday 13 April 2026 00:56:30 +0000 (0:00:08.579) 0:10:07.629 ********** 2026-04-13 00:58:09.725040 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-13 00:58:09.725044 | orchestrator | 2026-04-13 00:58:09.725047 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-13 00:58:09.725051 | orchestrator | Monday 13 April 2026 00:56:34 +0000 (0:00:03.937) 0:10:11.567 ********** 2026-04-13 00:58:09.725055 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5
2026-04-13 00:58:09.725059 | orchestrator |
2026-04-13 00:58:09.725062 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-13 00:58:09.725066 | orchestrator | Monday 13 April 2026 00:56:35 +0000 (0:00:00.537) 0:10:12.105 **********
2026-04-13 00:58:09.725070 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-13 00:58:09.725077 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-13 00:58:09.725081 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-13 00:58:09.725084 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-13 00:58:09.725088 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-13 00:58:09.725092 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-13 00:58:09.725095 | orchestrator |
2026-04-13 00:58:09.725099 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-13 00:58:09.725103 | orchestrator | Monday 13 April 2026 00:56:36 +0000 (0:00:01.445) 0:10:13.550 **********
2026-04-13 00:58:09.725107 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.725110 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.725114 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-13 00:58:09.725118 | orchestrator |
2026-04-13 00:58:09.725121 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-13 00:58:09.725127 | orchestrator | Monday 13 April 2026 00:56:39 +0000 (0:00:02.288) 0:10:15.839 **********
2026-04-13 00:58:09.725131 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.725135 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.725139 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725143 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-13 00:58:09.725146 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-13 00:58:09.725150 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725154 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-13 00:58:09.725157 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-13 00:58:09.725161 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725165 | orchestrator |
2026-04-13 00:58:09.725168 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-13 00:58:09.725172 | orchestrator | Monday 13 April 2026 00:56:40 +0000 (0:00:01.217) 0:10:17.057 **********
2026-04-13 00:58:09.725176 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725180 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725183 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725187 | orchestrator |
2026-04-13 00:58:09.725191 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-13 00:58:09.725195 | orchestrator | Monday 13 April 2026 00:56:43 +0000 (0:00:02.728) 0:10:19.785 **********
2026-04-13 00:58:09.725198 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725202 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725206 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725209 | orchestrator |
2026-04-13 00:58:09.725213 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-13 00:58:09.725217 | orchestrator | Monday 13 April 2026 00:56:43 +0000 (0:00:00.618) 0:10:20.403 **********
2026-04-13 00:58:09.725220 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.725224 | orchestrator |
2026-04-13 00:58:09.725228 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-13 00:58:09.725232 | orchestrator | Monday 13 April 2026 00:56:44 +0000 (0:00:00.589) 0:10:20.992 **********
2026-04-13 00:58:09.725235 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.725239 | orchestrator |
2026-04-13 00:58:09.725243 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-13 00:58:09.725246 | orchestrator | Monday 13 April 2026 00:56:45 +0000 (0:00:00.799) 0:10:21.792 **********
2026-04-13 00:58:09.725250 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725254 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725260 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725264 | orchestrator |
2026-04-13 00:58:09.725268 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-13 00:58:09.725271 | orchestrator | Monday 13 April 2026 00:56:46 +0000 (0:00:01.413) 0:10:23.206 **********
2026-04-13 00:58:09.725275 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725279 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725282 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725286 | orchestrator |
2026-04-13 00:58:09.725290 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-13 00:58:09.725294 | orchestrator | Monday 13 April 2026 00:56:47 +0000 (0:00:01.189) 0:10:24.395 **********
2026-04-13 00:58:09.725297 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725301 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725305 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725308 | orchestrator |
2026-04-13 00:58:09.725312 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-13 00:58:09.725316 | orchestrator | Monday 13 April 2026 00:56:49 +0000 (0:00:01.807) 0:10:26.203 **********
2026-04-13 00:58:09.725319 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725325 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725329 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725348 | orchestrator |
2026-04-13 00:58:09.725352 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-13 00:58:09.725356 | orchestrator | Monday 13 April 2026 00:56:51 +0000 (0:00:02.251) 0:10:28.454 **********
2026-04-13 00:58:09.725360 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725364 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725367 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725371 | orchestrator |
2026-04-13 00:58:09.725375 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-13 00:58:09.725378 | orchestrator | Monday 13 April 2026 00:56:53 +0000 (0:00:01.224) 0:10:29.679 **********
2026-04-13 00:58:09.725382 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725386 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725390 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725393 | orchestrator |
2026-04-13 00:58:09.725397 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-13 00:58:09.725401 | orchestrator | Monday 13 April 2026 00:56:53 +0000 (0:00:00.982) 0:10:30.662 **********
2026-04-13 00:58:09.725404 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.725408 | orchestrator |
2026-04-13 00:58:09.725412 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-13 00:58:09.725415 | orchestrator | Monday 13 April 2026 00:56:54 +0000 (0:00:00.651) 0:10:31.313 **********
2026-04-13 00:58:09.725419 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725423 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725427 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725430 | orchestrator |
2026-04-13 00:58:09.725434 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-13 00:58:09.725438 | orchestrator | Monday 13 April 2026 00:56:54 +0000 (0:00:00.320) 0:10:31.634 **********
2026-04-13 00:58:09.725442 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.725445 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.725449 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.725453 | orchestrator |
2026-04-13 00:58:09.725456 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-13 00:58:09.725462 | orchestrator | Monday 13 April 2026 00:56:56 +0000 (0:00:01.507) 0:10:33.141 **********
2026-04-13 00:58:09.725466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.725470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.725474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.725480 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725484 | orchestrator |
2026-04-13 00:58:09.725488 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-13 00:58:09.725492 | orchestrator | Monday 13 April 2026 00:56:57 +0000 (0:00:00.620) 0:10:33.762 **********
2026-04-13 00:58:09.725495 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725499 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725503 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725506 | orchestrator |
2026-04-13 00:58:09.725510 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-13 00:58:09.725514 | orchestrator |
2026-04-13 00:58:09.725518 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-13 00:58:09.725521 | orchestrator | Monday 13 April 2026 00:56:57 +0000 (0:00:00.566) 0:10:34.328 **********
2026-04-13 00:58:09.725525 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.725529 | orchestrator |
2026-04-13 00:58:09.725533 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-13 00:58:09.725536 | orchestrator | Monday 13 April 2026 00:56:58 +0000 (0:00:00.778) 0:10:35.106 **********
2026-04-13 00:58:09.725540 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.725544 | orchestrator |
2026-04-13 00:58:09.725547 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-13 00:58:09.725551 | orchestrator | Monday 13 April 2026 00:56:58 +0000 (0:00:00.537) 0:10:35.644 **********
2026-04-13 00:58:09.725555 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725559 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725562 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725566 | orchestrator |
2026-04-13 00:58:09.725570 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-13 00:58:09.725573 | orchestrator | Monday 13 April 2026 00:56:59 +0000 (0:00:00.314) 0:10:35.958 **********
2026-04-13 00:58:09.725577 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725581 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725584 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725588 | orchestrator |
2026-04-13 00:58:09.725592 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-13 00:58:09.725596 | orchestrator | Monday 13 April 2026 00:57:00 +0000 (0:00:01.117) 0:10:37.076 **********
2026-04-13 00:58:09.725599 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725603 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725607 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725610 | orchestrator |
2026-04-13 00:58:09.725614 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-13 00:58:09.725618 | orchestrator | Monday 13 April 2026 00:57:01 +0000 (0:00:00.787) 0:10:37.864 **********
2026-04-13 00:58:09.725621 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725625 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725629 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725633 | orchestrator |
2026-04-13 00:58:09.725636 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-13 00:58:09.725640 | orchestrator | Monday 13 April 2026 00:57:01 +0000 (0:00:00.785) 0:10:38.650 **********
2026-04-13 00:58:09.725644 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725648 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725651 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725655 | orchestrator |
2026-04-13 00:58:09.725661 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-13 00:58:09.725665 | orchestrator | Monday 13 April 2026 00:57:02 +0000 (0:00:00.328) 0:10:38.979 **********
2026-04-13 00:58:09.725669 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725672 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725676 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725682 | orchestrator |
2026-04-13 00:58:09.725686 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-13 00:58:09.725690 | orchestrator | Monday 13 April 2026 00:57:02 +0000 (0:00:00.593) 0:10:39.572 **********
2026-04-13 00:58:09.725694 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725697 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725701 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725705 | orchestrator |
2026-04-13 00:58:09.725709 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-13 00:58:09.725712 | orchestrator | Monday 13 April 2026 00:57:03 +0000 (0:00:00.321) 0:10:39.894 **********
2026-04-13 00:58:09.725716 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725720 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725723 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725734 | orchestrator |
2026-04-13 00:58:09.725738 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-13 00:58:09.725742 | orchestrator | Monday 13 April 2026 00:57:03 +0000 (0:00:00.696) 0:10:40.591 **********
2026-04-13 00:58:09.725745 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725749 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725753 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725763 | orchestrator |
2026-04-13 00:58:09.725766 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-13 00:58:09.725770 | orchestrator | Monday 13 April 2026 00:57:04 +0000 (0:00:00.773) 0:10:41.364 **********
2026-04-13 00:58:09.725774 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725778 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725781 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725785 | orchestrator |
2026-04-13 00:58:09.725789 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-13 00:58:09.725793 | orchestrator | Monday 13 April 2026 00:57:05 +0000 (0:00:00.621) 0:10:41.986 **********
2026-04-13 00:58:09.725798 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725802 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725806 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725810 | orchestrator |
2026-04-13 00:58:09.725813 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-13 00:58:09.725817 | orchestrator | Monday 13 April 2026 00:57:05 +0000 (0:00:00.326) 0:10:42.312 **********
2026-04-13 00:58:09.725821 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725825 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725828 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725832 | orchestrator |
2026-04-13 00:58:09.725836 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-13 00:58:09.725839 | orchestrator | Monday 13 April 2026 00:57:06 +0000 (0:00:00.364) 0:10:42.676 **********
2026-04-13 00:58:09.725843 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725847 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725851 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725854 | orchestrator |
2026-04-13 00:58:09.725858 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-13 00:58:09.725862 | orchestrator | Monday 13 April 2026 00:57:06 +0000 (0:00:00.329) 0:10:43.005 **********
2026-04-13 00:58:09.725866 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725869 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725873 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725877 | orchestrator |
2026-04-13 00:58:09.725880 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-13 00:58:09.725884 | orchestrator | Monday 13 April 2026 00:57:06 +0000 (0:00:00.659) 0:10:43.664 **********
2026-04-13 00:58:09.725888 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725892 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725895 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725899 | orchestrator |
2026-04-13 00:58:09.725903 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-13 00:58:09.725909 | orchestrator | Monday 13 April 2026 00:57:07 +0000 (0:00:00.334) 0:10:43.999 **********
2026-04-13 00:58:09.725913 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725917 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725921 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725924 | orchestrator |
2026-04-13 00:58:09.725928 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-13 00:58:09.725932 | orchestrator | Monday 13 April 2026 00:57:07 +0000 (0:00:00.320) 0:10:44.320 **********
2026-04-13 00:58:09.725936 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.725939 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.725943 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.725947 | orchestrator |
2026-04-13 00:58:09.725950 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-13 00:58:09.725954 | orchestrator | Monday 13 April 2026 00:57:07 +0000 (0:00:00.302) 0:10:44.623 **********
2026-04-13 00:58:09.725958 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725962 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725965 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725969 | orchestrator |
2026-04-13 00:58:09.725973 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-13 00:58:09.725977 | orchestrator | Monday 13 April 2026 00:57:08 +0000 (0:00:00.618) 0:10:45.241 **********
2026-04-13 00:58:09.725980 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.725984 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.725988 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.725992 | orchestrator |
2026-04-13 00:58:09.725995 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-13 00:58:09.725999 | orchestrator | Monday 13 April 2026 00:57:09 +0000 (0:00:00.596) 0:10:45.837 **********
2026-04-13 00:58:09.726003 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.726007 | orchestrator |
2026-04-13 00:58:09.726010 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-13 00:58:09.726038 | orchestrator | Monday 13 April 2026 00:57:10 +0000 (0:00:00.866) 0:10:46.703 **********
2026-04-13 00:58:09.726043 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726046 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.726050 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-13 00:58:09.726054 | orchestrator |
2026-04-13 00:58:09.726058 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-13 00:58:09.726061 | orchestrator | Monday 13 April 2026 00:57:12 +0000 (0:00:02.329) 0:10:49.033 **********
2026-04-13 00:58:09.726065 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.726069 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.726073 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.726076 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-13 00:58:09.726080 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-13 00:58:09.726084 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.726087 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-13 00:58:09.726091 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-13 00:58:09.726095 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.726099 | orchestrator |
2026-04-13 00:58:09.726102 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-13 00:58:09.726106 | orchestrator | Monday 13 April 2026 00:57:13 +0000 (0:00:01.288) 0:10:50.322 **********
2026-04-13 00:58:09.726110 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726114 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.726117 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.726121 | orchestrator |
2026-04-13 00:58:09.726125 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-13 00:58:09.726132 | orchestrator | Monday 13 April 2026 00:57:14 +0000 (0:00:00.350) 0:10:50.672 **********
2026-04-13 00:58:09.726135 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.726139 | orchestrator |
2026-04-13 00:58:09.726143 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-13 00:58:09.726149 | orchestrator | Monday 13 April 2026 00:57:14 +0000 (0:00:00.826) 0:10:51.499 **********
2026-04-13 00:58:09.726153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.726157 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.726161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.726164 | orchestrator |
2026-04-13 00:58:09.726168 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-13 00:58:09.726172 | orchestrator | Monday 13 April 2026 00:57:15 +0000 (0:00:00.800) 0:10:52.300 **********
2026-04-13 00:58:09.726176 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726179 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-13 00:58:09.726183 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726187 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-13 00:58:09.726191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726195 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-13 00:58:09.726199 | orchestrator |
2026-04-13 00:58:09.726202 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-13 00:58:09.726206 | orchestrator | Monday 13 April 2026 00:57:20 +0000 (0:00:04.709) 0:10:57.009 **********
2026-04-13 00:58:09.726210 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726214 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-13 00:58:09.726217 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726221 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-13 00:58:09.726225 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-13 00:58:09.726229 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-13 00:58:09.726232 | orchestrator |
2026-04-13 00:58:09.726236 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-13 00:58:09.726240 | orchestrator | Monday 13 April 2026 00:57:22 +0000 (0:00:02.489) 0:10:59.499 **********
2026-04-13 00:58:09.726244 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-13 00:58:09.726247 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.726251 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-13 00:58:09.726255 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.726259 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-13 00:58:09.726263 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.726266 | orchestrator |
2026-04-13 00:58:09.726270 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-13 00:58:09.726276 | orchestrator | Monday 13 April 2026 00:57:24 +0000 (0:00:01.587) 0:11:01.087 **********
2026-04-13 00:58:09.726280 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-04-13 00:58:09.726287 | orchestrator |
2026-04-13 00:58:09.726290 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-13 00:58:09.726294 | orchestrator | Monday 13 April 2026 00:57:24 +0000 (0:00:00.241) 0:11:01.328 **********
2026-04-13 00:58:09.726298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726317 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726321 | orchestrator |
2026-04-13 00:58:09.726325 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-13 00:58:09.726329 | orchestrator | Monday 13 April 2026 00:57:25 +0000 (0:00:00.606) 0:11:01.935 **********
2026-04-13 00:58:09.726362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726384 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726388 | orchestrator |
2026-04-13 00:58:09.726392 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-13 00:58:09.726396 | orchestrator | Monday 13 April 2026 00:57:25 +0000 (0:00:00.573) 0:11:02.508 **********
2026-04-13 00:58:09.726399 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726407 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726411 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726415 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-13 00:58:09.726418 | orchestrator |
2026-04-13 00:58:09.726424 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-13 00:58:09.726429 | orchestrator | Monday 13 April 2026 00:57:55 +0000 (0:00:30.080) 0:11:32.589 **********
2026-04-13 00:58:09.726436 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726442 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.726447 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.726453 | orchestrator |
2026-04-13 00:58:09.726463 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-13 00:58:09.726469 | orchestrator | Monday 13 April 2026 00:57:56 +0000 (0:00:00.334) 0:11:32.924 **********
2026-04-13 00:58:09.726475 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726481 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.726486 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.726491 | orchestrator |
2026-04-13 00:58:09.726497 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-13 00:58:09.726502 | orchestrator | Monday 13 April 2026 00:57:56 +0000 (0:00:00.571) 0:11:33.495 **********
2026-04-13 00:58:09.726508 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.726515 | orchestrator |
2026-04-13 00:58:09.726521 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-13 00:58:09.726526 | orchestrator | Monday 13 April 2026 00:57:57 +0000 (0:00:00.553) 0:11:34.048 **********
2026-04-13 00:58:09.726533 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.726538 | orchestrator |
2026-04-13 00:58:09.726547 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-13 00:58:09.726553 | orchestrator | Monday 13 April 2026 00:57:58 +0000 (0:00:00.783) 0:11:34.832 **********
2026-04-13 00:58:09.726559 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.726565 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.726572 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.726576 | orchestrator |
2026-04-13 00:58:09.726579 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-13 00:58:09.726583 | orchestrator | Monday 13 April 2026 00:57:59 +0000 (0:00:01.198) 0:11:36.030 **********
2026-04-13 00:58:09.726587 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.726591 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.726594 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.726598 | orchestrator |
2026-04-13 00:58:09.726602 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-13 00:58:09.726605 | orchestrator | Monday 13 April 2026 00:58:00 +0000 (0:00:01.194) 0:11:37.225 **********
2026-04-13 00:58:09.726609 | orchestrator | changed: [testbed-node-3]
2026-04-13 00:58:09.726613 | orchestrator | changed: [testbed-node-5]
2026-04-13 00:58:09.726616 | orchestrator | changed: [testbed-node-4]
2026-04-13 00:58:09.726620 | orchestrator |
2026-04-13 00:58:09.726624 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-13 00:58:09.726628 | orchestrator | Monday 13 April 2026 00:58:02 +0000 (0:00:02.052) 0:11:39.277 **********
2026-04-13 00:58:09.726631 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.726635 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.726639 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-13 00:58:09.726643 | orchestrator |
2026-04-13 00:58:09.726646 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-13 00:58:09.726655 | orchestrator | Monday 13 April 2026 00:58:05 +0000 (0:00:02.793) 0:11:42.071 **********
2026-04-13 00:58:09.726659 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726663 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.726666 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.726670 | orchestrator |
2026-04-13 00:58:09.726674 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-13 00:58:09.726677 | orchestrator | Monday 13 April 2026 00:58:05 +0000 (0:00:00.377) 0:11:42.449 **********
2026-04-13 00:58:09.726681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-13 00:58:09.726688 | orchestrator |
2026-04-13 00:58:09.726692 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-13 00:58:09.726696 | orchestrator | Monday 13 April 2026 00:58:06 +0000 (0:00:00.832) 0:11:43.281 **********
2026-04-13 00:58:09.726700 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.726703 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.726707 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.726711 | orchestrator |
2026-04-13 00:58:09.726714 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-13 00:58:09.726718 | orchestrator | Monday 13 April 2026 00:58:06 +0000 (0:00:00.322) 0:11:43.604 **********
2026-04-13 00:58:09.726722 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726726 | orchestrator | skipping: [testbed-node-4]
2026-04-13 00:58:09.726729 | orchestrator | skipping: [testbed-node-5]
2026-04-13 00:58:09.726733 | orchestrator |
2026-04-13 00:58:09.726737 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-13 00:58:09.726740 | orchestrator | Monday 13 April 2026 00:58:07 +0000 (0:00:00.386) 0:11:43.990 **********
2026-04-13 00:58:09.726744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-13 00:58:09.726748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-13 00:58:09.726752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-13 00:58:09.726755 | orchestrator | skipping: [testbed-node-3]
2026-04-13 00:58:09.726759 | orchestrator |
2026-04-13 00:58:09.726763 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-13 00:58:09.726766 | orchestrator | Monday 13 April 2026 00:58:08 +0000 (0:00:00.875) 0:11:44.866 **********
2026-04-13 00:58:09.726770 | orchestrator | ok: [testbed-node-3]
2026-04-13 00:58:09.726774 | orchestrator | ok: [testbed-node-4]
2026-04-13 00:58:09.726778 | orchestrator | ok: [testbed-node-5]
2026-04-13 00:58:09.726781 | orchestrator |
2026-04-13 00:58:09.726785 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 00:58:09.726789 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-13 00:58:09.726793 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-13 00:58:09.726797 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-13 00:58:09.726800 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-13 00:58:09.726804 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-13 00:58:09.726811 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-13 00:58:09.726814 | orchestrator |
2026-04-13 00:58:09.726818 | orchestrator |
2026-04-13 00:58:09.726822 | orchestrator |
2026-04-13 00:58:09.726826 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 00:58:09.726829 | orchestrator | Monday 13 April 2026 00:58:08 +0000 (0:00:00.517) 0:11:45.384 **********
2026-04-13 00:58:09.726833 | orchestrator | ===============================================================================
2026-04-13 00:58:09.726837 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 67.32s
2026-04-13 00:58:09.726841 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.93s
2026-04-13 00:58:09.726844 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.96s
2026-04-13 00:58:09.726848 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.08s
2026-04-13 00:58:09.726855 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.88s
2026-04-13 00:58:09.726859 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.04s
2026-04-13 00:58:09.726863 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.11s
2026-04-13 00:58:09.726866 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.81s
2026-04-13 00:58:09.726870 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.35s
2026-04-13 00:58:09.726874 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.58s
2026-04-13 00:58:09.726878 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 7.85s
2026-04-13 00:58:09.726881 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.11s
2026-04-13 00:58:09.726885 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.49s
2026-04-13 00:58:09.726889 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.80s
2026-04-13 00:58:09.726893 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.71s
2026-04-13 00:58:09.726899 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.15s
2026-04-13 00:58:09.726903 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.94s
2026-04-13 00:58:09.726906 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.82s
2026-04-13 00:58:09.726910 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.53s
2026-04-13 00:58:09.726914 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.17s
2026-04-13 00:58:09.726917 | orchestrator | 2026-04-13 00:58:09 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state STARTED
2026-04-13 00:58:09.726921 | orchestrator | 2026-04-13 00:58:09 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:12.761841 | orchestrator | 2026-04-13 00:58:12 | INFO  | Task dcc3869e-539c-4001-96cc-7039f4472a8c is in state STARTED
2026-04-13 00:58:12.763358 | orchestrator | 2026-04-13 00:58:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:58:12.765166 | orchestrator | 2026-04-13 00:58:12 | INFO  | Task 74dea42e-1c24-4c56-8397-2cf6aca7c4b7 is in state SUCCESS
2026-04-13 00:58:12.766506 | orchestrator | 2026-04-13 00:58:12 | INFO  | Wait 1 second(s) until the next check
2026-04-13 00:58:15.807766 | orchestrator | 2026-04-13 00:58:15 | INFO  | Task dcc3869e-539c-4001-96cc-7039f4472a8c is in state STARTED
2026-04-13 00:58:15.809927 | orchestrator | 2026-04-13 00:58:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 00:58:15.809980 | orchestrator | 2026-04-13 00:58:15 | INFO  | Wait 1 second(s) until the next check
[... identical state checks repeated every ~3 seconds from 00:58:18 through 01:00:20; tasks dcc3869e-539c-4001-96cc-7039f4472a8c and d4669e69-7e59-489c-99b4-e1b8031d1e22 remained in state STARTED ...]
2026-04-13 01:00:23.884840 | orchestrator | 2026-04-13 01:00:23 | INFO  | Task dcc3869e-539c-4001-96cc-7039f4472a8c is in state SUCCESS
2026-04-13 01:00:23.887924 | orchestrator |
2026-04-13 01:00:23.888050 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-13 01:00:23.888064 | orchestrator |
2026-04-13 01:00:23.888075 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-13 01:00:23.888086 | orchestrator | Monday 13 April 2026  00:57:15 +0000 (0:00:00.325)       0:00:00.325 **********
2026-04-13 01:00:23.888097 | orchestrator | ok: [testbed-node-0]
2026-04-13 01:00:23.888109 | orchestrator | ok: [testbed-node-1]
2026-04-13 01:00:23.888121 | orchestrator | ok: [testbed-node-2]
2026-04-13 01:00:23.888140 | orchestrator |
2026-04-13 01:00:23.888158 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-13 01:00:23.888176 | orchestrator | Monday 13 April 2026  00:57:16 +0000 (0:00:00.282)       0:00:00.608 **********
2026-04-13 01:00:23.888194 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-13 01:00:23.888212 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-13 01:00:23.888230 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-13 01:00:23.888276 | orchestrator |
2026-04-13 01:00:23.888295 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-13 01:00:23.888313 | orchestrator |
2026-04-13 01:00:23.888330 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-13 01:00:23.888349 | orchestrator | Monday 13 April 2026  00:57:16 +0000 (0:00:00.291)       0:00:00.899 **********
2026-04-13 01:00:23.888368 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-13 01:00:23.888388 | orchestrator |
2026-04-13 01:00:23.888439 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-04-13 01:00:23.888460 | orchestrator | Monday 13 April 2026  00:57:16 +0000 (0:00:00.653)       0:00:01.552 **********
2026-04-13 01:00:23.888479 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-04-13 01:00:23.888498 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-04-13 01:00:23.888590 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left). 2026-04-13 01:00:23.888613 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left). 2026-04-13 01:00:23.888635 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left). 2026-04-13 01:00:23.888659 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-13 01:00:23.888684 | orchestrator | 2026-04-13 01:00:23.888706 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:00:23.888729 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-13 01:00:23.888751 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:00:23.888774 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:00:23.888792 | orchestrator | 2026-04-13 01:00:23.888812 | orchestrator | 2026-04-13 01:00:23.888832 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:00:23.888851 | orchestrator | Monday 13 April 2026 00:58:10 +0000 (0:00:53.588) 0:00:55.141 ********** 2026-04-13 01:00:23.888999 | orchestrator | =============================================================================== 2026-04-13 01:00:23.889037 | orchestrator | service-ks-register : 
magnum | Creating/deleting services -------------- 53.59s 2026-04-13 01:00:23.889048 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.65s 2026-04-13 01:00:23.889059 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2026-04-13 01:00:23.889070 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-04-13 01:00:23.889081 | orchestrator | 2026-04-13 01:00:23.889092 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-13 01:00:23.889103 | orchestrator | 2.16.14 2026-04-13 01:00:23.889114 | orchestrator | 2026-04-13 01:00:23.889125 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-13 01:00:23.889136 | orchestrator | 2026-04-13 01:00:23.889147 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-13 01:00:23.889158 | orchestrator | Monday 13 April 2026 00:58:14 +0000 (0:00:00.599) 0:00:00.599 ********** 2026-04-13 01:00:23.889169 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:00:23.889180 | orchestrator | 2026-04-13 01:00:23.889191 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-13 01:00:23.889202 | orchestrator | Monday 13 April 2026 00:58:14 +0000 (0:00:00.635) 0:00:01.234 ********** 2026-04-13 01:00:23.889213 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.889223 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.889234 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.889562 | orchestrator | 2026-04-13 01:00:23.889584 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-13 01:00:23.889656 | orchestrator | Monday 13 April 2026 00:58:15 +0000 (0:00:00.988) 
0:00:02.223 ********** 2026-04-13 01:00:23.889679 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.889707 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.889730 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.889747 | orchestrator | 2026-04-13 01:00:23.889764 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-13 01:00:23.889781 | orchestrator | Monday 13 April 2026 00:58:15 +0000 (0:00:00.292) 0:00:02.516 ********** 2026-04-13 01:00:23.889797 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.889815 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.889834 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.889852 | orchestrator | 2026-04-13 01:00:23.889871 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-13 01:00:23.889888 | orchestrator | Monday 13 April 2026 00:58:16 +0000 (0:00:00.802) 0:00:03.318 ********** 2026-04-13 01:00:23.889906 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.889923 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.889942 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.889961 | orchestrator | 2026-04-13 01:00:23.889980 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-13 01:00:23.889999 | orchestrator | Monday 13 April 2026 00:58:17 +0000 (0:00:00.318) 0:00:03.637 ********** 2026-04-13 01:00:23.890075 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.890092 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.890102 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.890111 | orchestrator | 2026-04-13 01:00:23.890121 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-13 01:00:23.890142 | orchestrator | Monday 13 April 2026 00:58:17 +0000 (0:00:00.312) 0:00:03.950 ********** 2026-04-13 01:00:23.890152 | orchestrator | ok: 
[testbed-node-3] 2026-04-13 01:00:23.890162 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.890172 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.890181 | orchestrator | 2026-04-13 01:00:23.890191 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-13 01:00:23.890201 | orchestrator | Monday 13 April 2026 00:58:17 +0000 (0:00:00.312) 0:00:04.262 ********** 2026-04-13 01:00:23.890224 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.890235 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.890264 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.890275 | orchestrator | 2026-04-13 01:00:23.890285 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-13 01:00:23.890294 | orchestrator | Monday 13 April 2026 00:58:18 +0000 (0:00:00.517) 0:00:04.780 ********** 2026-04-13 01:00:23.890304 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.890314 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.890323 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.890333 | orchestrator | 2026-04-13 01:00:23.890342 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-13 01:00:23.890353 | orchestrator | Monday 13 April 2026 00:58:18 +0000 (0:00:00.298) 0:00:05.078 ********** 2026-04-13 01:00:23.890362 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 01:00:23.890372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 01:00:23.890381 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 01:00:23.890391 | orchestrator | 2026-04-13 01:00:23.890401 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-13 01:00:23.890410 | orchestrator | 
Monday 13 April 2026 00:58:19 +0000 (0:00:00.623) 0:00:05.702 ********** 2026-04-13 01:00:23.890420 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.890430 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.890439 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.890449 | orchestrator | 2026-04-13 01:00:23.890459 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-13 01:00:23.890468 | orchestrator | Monday 13 April 2026 00:58:19 +0000 (0:00:00.430) 0:00:06.133 ********** 2026-04-13 01:00:23.890478 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 01:00:23.890487 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 01:00:23.890497 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 01:00:23.890507 | orchestrator | 2026-04-13 01:00:23.890516 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-13 01:00:23.890526 | orchestrator | Monday 13 April 2026 00:58:22 +0000 (0:00:03.069) 0:00:09.202 ********** 2026-04-13 01:00:23.890536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-13 01:00:23.890546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-13 01:00:23.890556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-13 01:00:23.890565 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.890575 | orchestrator | 2026-04-13 01:00:23.890585 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-13 01:00:23.890594 | orchestrator | Monday 13 April 2026 00:58:23 +0000 (0:00:00.424) 0:00:09.626 ********** 2026-04-13 01:00:23.890606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890691 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.890701 | orchestrator |
2026-04-13 01:00:23.890711 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-13 01:00:23.890729 | orchestrator | Monday 13 April 2026 00:58:23 +0000 (0:00:00.845) 0:00:10.472 **********
2026-04-13 01:00:23.890740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890780 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.890790 | orchestrator |
2026-04-13 01:00:23.890800 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-13 01:00:23.890810 | orchestrator | Monday 13 April 2026 00:58:24 +0000 (0:00:00.183) 0:00:10.656 **********
2026-04-13 01:00:23.890821 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9cff66c3b414', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-13 00:58:20.574236', 'end': '2026-04-13 00:58:20.617053', 'delta': '0:00:00.042817', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cff66c3b414'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890835 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7d6a2f3a2fab', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-13 00:58:21.672277', 'end': '2026-04-13 00:58:21.706157', 'delta': '0:00:00.033880', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d6a2f3a2fab'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890845 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c3e0f20542c7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-13 00:58:22.469469', 'end': '2026-04-13 00:58:22.510669', 'delta': '0:00:00.041200', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c3e0f20542c7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-13 01:00:23.890865 | orchestrator |
2026-04-13 01:00:23.890875 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-13 01:00:23.890912 | orchestrator | Monday 13 April 2026 00:58:24 +0000 (0:00:00.403) 0:00:11.060 **********
2026-04-13 01:00:23.890924 | orchestrator | ok: [testbed-node-3]
2026-04-13 01:00:23.890934 | orchestrator | ok: [testbed-node-4]
2026-04-13 01:00:23.890943 | orchestrator | ok: [testbed-node-5]
2026-04-13 01:00:23.890953 | orchestrator |
2026-04-13 01:00:23.890963 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-13 01:00:23.890973 | orchestrator | Monday 13 April 2026 00:58:24 +0000 (0:00:00.423) 0:00:11.483 **********
2026-04-13 01:00:23.890983 | orchestrator | ok:
[testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-13 01:00:23.890993 | orchestrator |
2026-04-13 01:00:23.891002 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-13 01:00:23.891013 | orchestrator | Monday 13 April 2026 00:58:26 +0000 (0:00:01.749) 0:00:13.233 **********
2026-04-13 01:00:23.891022 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891032 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891042 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891052 | orchestrator |
2026-04-13 01:00:23.891061 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-13 01:00:23.891071 | orchestrator | Monday 13 April 2026 00:58:26 +0000 (0:00:00.394) 0:00:13.551 **********
2026-04-13 01:00:23.891081 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891091 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891101 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891110 | orchestrator |
2026-04-13 01:00:23.891120 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-13 01:00:23.891142 | orchestrator | Monday 13 April 2026 00:58:27 +0000 (0:00:00.521) 0:00:13.946 **********
2026-04-13 01:00:23.891152 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891162 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891172 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891181 | orchestrator |
2026-04-13 01:00:23.891191 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-13 01:00:23.891201 | orchestrator | Monday 13 April 2026 00:58:27 +0000 (0:00:00.521) 0:00:14.467 **********
2026-04-13 01:00:23.891210 | orchestrator | ok: [testbed-node-3]
2026-04-13 01:00:23.891220 | orchestrator |
2026-04-13 01:00:23.891230 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-13 01:00:23.891240 | orchestrator | Monday 13 April 2026 00:58:28 +0000 (0:00:00.140) 0:00:14.607 **********
2026-04-13 01:00:23.891269 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891279 | orchestrator |
2026-04-13 01:00:23.891289 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-13 01:00:23.891299 | orchestrator | Monday 13 April 2026 00:58:28 +0000 (0:00:00.241) 0:00:14.849 **********
2026-04-13 01:00:23.891308 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891318 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891328 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891337 | orchestrator |
2026-04-13 01:00:23.891347 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-13 01:00:23.891357 | orchestrator | Monday 13 April 2026 00:58:28 +0000 (0:00:00.275) 0:00:15.125 **********
2026-04-13 01:00:23.891367 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891377 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891387 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891397 | orchestrator |
2026-04-13 01:00:23.891407 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-13 01:00:23.891417 | orchestrator | Monday 13 April 2026 00:58:28 +0000 (0:00:00.329) 0:00:15.454 **********
2026-04-13 01:00:23.891426 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891444 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891454 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891464 | orchestrator |
2026-04-13 01:00:23.891474 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-13 01:00:23.891484 | orchestrator | Monday 13 April 2026 00:58:29 +0000 (0:00:00.516) 0:00:15.970 **********
2026-04-13 01:00:23.891493 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891503 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891513 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891523 | orchestrator |
2026-04-13 01:00:23.891533 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-13 01:00:23.891543 | orchestrator | Monday 13 April 2026 00:58:29 +0000 (0:00:00.333) 0:00:16.304 **********
2026-04-13 01:00:23.891553 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891562 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891572 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891582 | orchestrator |
2026-04-13 01:00:23.891592 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-13 01:00:23.891601 | orchestrator | Monday 13 April 2026 00:58:30 +0000 (0:00:00.327) 0:00:16.632 **********
2026-04-13 01:00:23.891611 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891621 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891631 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891640 | orchestrator |
2026-04-13 01:00:23.891650 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-13 01:00:23.891660 | orchestrator | Monday 13 April 2026 00:58:30 +0000 (0:00:00.327) 0:00:16.960 **********
2026-04-13 01:00:23.891670 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.891679 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:00:23.891689 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:00:23.891699 | orchestrator |
2026-04-13 01:00:23.891709 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-13 01:00:23.891719 | orchestrator | Monday 13 April 2026
00:58:30 +0000 (0:00:00.524) 0:00:17.484 **********
2026-04-13 01:00:23.891761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a', 'dm-uuid-LVM-cSl6EFC0vACy8fJ7BlSqjPds1pJgcxwLfXOzNyamFGRTM2VeT5WrO4p2XomDG9q7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--100799fe--f0b8--5d68--80c9--d39d0aace7f9-osd--block--100799fe--f0b8--5d68--80c9--d39d0aace7f9', 'dm-uuid-LVM-IqJ6f8a9dcmLdR12gJUOXnHw7clvOZWm9FD367r6iAkFJPcS6r2dVm1z76pgeY48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.891917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part1', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part14', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part15', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part16', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.891937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1IV4PH-Qc9i-ENDW-Z9pI-tJih-3vlb-22if96', 'scsi-0QEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089', 'scsi-SQEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.891949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--100799fe--f0b8--5d68--80c9--d39d0aace7f9-osd--block--100799fe--f0b8--5d68--80c9--d39d0aace7f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n4mNIH-HvKZ-CQXZ-jvFD-Dgf1-ia3W-N6c03E', 'scsi-0QEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1', 'scsi-SQEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.891989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--586ba51f--dba7--5dcd--8710--1804179cab86-osd--block--586ba51f--dba7--5dcd--8710--1804179cab86', 'dm-uuid-LVM-8caEtY6MBEn2RdyAHnKISh0sKzPpSLh1PICaFdkYsf0qkm2dV0jOvEPAX71wUGht'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb', 'scsi-SQEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--971aa970--5a40--5da7--9620--8f2c789358d2-osd--block--971aa970--5a40--5da7--9620--8f2c789358d2', 'dm-uuid-LVM-aFwIWAYFs8WYeXQaKcSMhdbdGZ2QSYf8M9Wn37p1lnEvd08xMlzmh3CEsSCBXnLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational':
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892055 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:00:23.892066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000', 'dm-uuid-LVM-1txSuAOOptD8I4h4eKjXc96vtE7f6jWbC9BOAp4vhlWbCNsWn0IEIKhOWruHyV8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196', 'dm-uuid-LVM-SQiielPvNiJjT4l9ezQgzn3ldkRcoUJzGPQdxfMc0JVwjrask2CEmaj4gQR7EVtA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part1', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part14', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part15', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part16', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--586ba51f--dba7--5dcd--8710--1804179cab86-osd--block--586ba51f--dba7--5dcd--8710--1804179cab86'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fnwwdP-vgp1-BVac-ze3F-QwX8-FUj4-mA0ico', 'scsi-0QEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7', 'scsi-SQEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--971aa970--5a40--5da7--9620--8f2c789358d2-osd--block--971aa970--5a40--5da7--9620--8f2c789358d2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wjwTDS-mZPP-mWMN-37Tp-AE7E-Rqg4-v0jeB5', 'scsi-0QEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9', 'scsi-SQEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69', 'scsi-SQEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-13 01:00:23.892394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-13 01:00:23.892405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2026-04-13 01:00:23.892415 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.892426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 01:00:23.892436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-13 01:00:23.892455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 01:00:23.892480 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KOrNQr-6C6l-bOv2-PHf1-dI8W-RMyy-ZtFrf4', 'scsi-0QEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5', 'scsi-SQEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 01:00:23.892491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rVbr9f-Be4n-dgvH-W7EQ-qBne-SxNz-hN6c4z', 'scsi-0QEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687', 'scsi-SQEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 01:00:23.892502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b', 'scsi-SQEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 01:00:23.892512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-13 01:00:23.892523 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.892533 | orchestrator | 2026-04-13 01:00:23.892543 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-13 01:00:23.892553 | orchestrator | Monday 13 April 2026 00:58:31 +0000 (0:00:00.553) 0:00:18.038 ********** 2026-04-13 01:00:23.892570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a', 'dm-uuid-LVM-cSl6EFC0vACy8fJ7BlSqjPds1pJgcxwLfXOzNyamFGRTM2VeT5WrO4p2XomDG9q7'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--100799fe--f0b8--5d68--80c9--d39d0aace7f9-osd--block--100799fe--f0b8--5d68--80c9--d39d0aace7f9', 'dm-uuid-LVM-IqJ6f8a9dcmLdR12gJUOXnHw7clvOZWm9FD367r6iAkFJPcS6r2dVm1z76pgeY48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892603 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--586ba51f--dba7--5dcd--8710--1804179cab86-osd--block--586ba51f--dba7--5dcd--8710--1804179cab86', 'dm-uuid-LVM-8caEtY6MBEn2RdyAHnKISh0sKzPpSLh1PICaFdkYsf0qkm2dV0jOvEPAX71wUGht'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--971aa970--5a40--5da7--9620--8f2c789358d2-osd--block--971aa970--5a40--5da7--9620--8f2c789358d2', 'dm-uuid-LVM-aFwIWAYFs8WYeXQaKcSMhdbdGZ2QSYf8M9Wn37p1lnEvd08xMlzmh3CEsSCBXnLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-13 01:00:23.892739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part1', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part14', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part15', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part16', 'scsi-SQEMU_QEMU_HARDDISK_10c37310-1140-4628-b353-2a1f2074e1b5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a-osd--block--9b6aa2f8--de46--5cb6--b1a4--58b08f65cf0a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1IV4PH-Qc9i-ENDW-Z9pI-tJih-3vlb-22if96', 'scsi-0QEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089', 'scsi-SQEMU_QEMU_HARDDISK_70b2b286-75d2-4918-b809-b0d3c77d8089'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892779 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--100799fe--f0b8--5d68--80c9--d39d0aace7f9-osd--block--100799fe--f0b8--5d68--80c9--d39d0aace7f9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n4mNIH-HvKZ-CQXZ-jvFD-Dgf1-ia3W-N6c03E', 'scsi-0QEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1', 'scsi-SQEMU_QEMU_HARDDISK_e58cc4cd-c100-42fd-a854-9a07c2c5ceb1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb', 'scsi-SQEMU_QEMU_HARDDISK_1ff476bc-ae0b-4cfd-96fa-c57a101f59cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892841 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892861 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892882 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892890 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.892899 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part1', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part14', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part15', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part16', 'scsi-SQEMU_QEMU_HARDDISK_864d1fd1-7283-4358-a23f-be2c6ef28191-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--586ba51f--dba7--5dcd--8710--1804179cab86-osd--block--586ba51f--dba7--5dcd--8710--1804179cab86'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fnwwdP-vgp1-BVac-ze3F-QwX8-FUj4-mA0ico', 'scsi-0QEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7', 'scsi-SQEMU_QEMU_HARDDISK_28faf471-35fc-493f-ba87-763b98edc4d7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--971aa970--5a40--5da7--9620--8f2c789358d2-osd--block--971aa970--5a40--5da7--9620--8f2c789358d2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wjwTDS-mZPP-mWMN-37Tp-AE7E-Rqg4-v0jeB5', 'scsi-0QEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9', 'scsi-SQEMU_QEMU_HARDDISK_2d6b0ac7-37bd-44a3-98bf-24bee37418a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69', 'scsi-SQEMU_QEMU_HARDDISK_40b67a78-e903-4b7b-9416-2311a13eed69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892971 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.892986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000', 'dm-uuid-LVM-1txSuAOOptD8I4h4eKjXc96vtE7f6jWbC9BOAp4vhlWbCNsWn0IEIKhOWruHyV8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.892999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196', 'dm-uuid-LVM-SQiielPvNiJjT4l9ezQgzn3ldkRcoUJzGPQdxfMc0JVwjrask2CEmaj4gQR7EVtA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893016 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893024 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893052 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893074 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2cf32096-6de7-4248-ae06-d0996d3d3c8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893114 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000-osd--block--d9f8332f--65b5--5ad5--8d64--0b4e5e7cc000'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KOrNQr-6C6l-bOv2-PHf1-dI8W-RMyy-ZtFrf4', 'scsi-0QEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5', 'scsi-SQEMU_QEMU_HARDDISK_5e205b26-74df-4a0d-a6b0-fd65d84e1df5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7331b6c9--9d3b--5dac--8499--53ee0940f196-osd--block--7331b6c9--9d3b--5dac--8499--53ee0940f196'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rVbr9f-Be4n-dgvH-W7EQ-qBne-SxNz-hN6c4z', 'scsi-0QEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687', 'scsi-SQEMU_QEMU_HARDDISK_3fbef31d-44a1-4ae9-9145-86033c094687'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b', 'scsi-SQEMU_QEMU_HARDDISK_d506fd3a-4f98-4a08-a2bf-c3638f88932b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-13-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-13 01:00:23.893153 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893161 | orchestrator | 2026-04-13 01:00:23.893170 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-13 01:00:23.893178 | orchestrator | Monday 13 April 2026 00:58:32 +0000 (0:00:00.667) 0:00:18.705 ********** 2026-04-13 01:00:23.893187 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.893195 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.893202 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.893211 | orchestrator | 2026-04-13 01:00:23.893219 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-13 01:00:23.893231 | orchestrator | Monday 13 April 2026 00:58:32 +0000 (0:00:00.659) 0:00:19.365 ********** 2026-04-13 01:00:23.893240 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.893262 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.893270 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.893278 | orchestrator | 2026-04-13 01:00:23.893286 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 01:00:23.893294 | orchestrator | Monday 13 April 2026 00:58:33 +0000 (0:00:00.476) 0:00:19.841 ********** 2026-04-13 01:00:23.893302 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.893310 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.893318 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.893326 | orchestrator | 2026-04-13 01:00:23.893334 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 01:00:23.893342 | orchestrator | Monday 13 April 2026 00:58:34 +0000 (0:00:00.758) 0:00:20.600 
********** 2026-04-13 01:00:23.893350 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893358 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893366 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893374 | orchestrator | 2026-04-13 01:00:23.893382 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-13 01:00:23.893390 | orchestrator | Monday 13 April 2026 00:58:34 +0000 (0:00:00.285) 0:00:20.885 ********** 2026-04-13 01:00:23.893398 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893406 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893414 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893421 | orchestrator | 2026-04-13 01:00:23.893429 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-13 01:00:23.893441 | orchestrator | Monday 13 April 2026 00:58:34 +0000 (0:00:00.421) 0:00:21.306 ********** 2026-04-13 01:00:23.893449 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893457 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893464 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893472 | orchestrator | 2026-04-13 01:00:23.893480 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-13 01:00:23.893488 | orchestrator | Monday 13 April 2026 00:58:35 +0000 (0:00:00.531) 0:00:21.838 ********** 2026-04-13 01:00:23.893496 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-13 01:00:23.893504 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-13 01:00:23.893512 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-13 01:00:23.893525 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-13 01:00:23.893533 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-13 01:00:23.893541 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-13 01:00:23.893549 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-13 01:00:23.893556 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-13 01:00:23.893564 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-13 01:00:23.893572 | orchestrator | 2026-04-13 01:00:23.893580 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-13 01:00:23.893588 | orchestrator | Monday 13 April 2026 00:58:36 +0000 (0:00:00.833) 0:00:22.671 ********** 2026-04-13 01:00:23.893596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-13 01:00:23.893604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-13 01:00:23.893612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-13 01:00:23.893619 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893627 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-13 01:00:23.893635 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-13 01:00:23.893643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-13 01:00:23.893650 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893658 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-13 01:00:23.893666 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-13 01:00:23.893674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-13 01:00:23.893682 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893690 | orchestrator | 2026-04-13 01:00:23.893697 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-13 01:00:23.893705 | orchestrator | Monday 13 April 2026 00:58:36 +0000 (0:00:00.364) 0:00:23.036 ********** 2026-04-13 
01:00:23.893713 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:00:23.893721 | orchestrator | 2026-04-13 01:00:23.893729 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-13 01:00:23.893737 | orchestrator | Monday 13 April 2026 00:58:37 +0000 (0:00:00.706) 0:00:23.742 ********** 2026-04-13 01:00:23.893745 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893753 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893761 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893769 | orchestrator | 2026-04-13 01:00:23.893777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-13 01:00:23.893785 | orchestrator | Monday 13 April 2026 00:58:37 +0000 (0:00:00.324) 0:00:24.067 ********** 2026-04-13 01:00:23.893792 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893801 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893809 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893816 | orchestrator | 2026-04-13 01:00:23.893824 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-13 01:00:23.893832 | orchestrator | Monday 13 April 2026 00:58:37 +0000 (0:00:00.309) 0:00:24.377 ********** 2026-04-13 01:00:23.893840 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893848 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.893856 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:00:23.893864 | orchestrator | 2026-04-13 01:00:23.893872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-13 01:00:23.893880 | orchestrator | Monday 13 April 2026 00:58:38 +0000 (0:00:00.331) 0:00:24.708 ********** 2026-04-13 
01:00:23.893892 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.893900 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.893908 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.893916 | orchestrator | 2026-04-13 01:00:23.893929 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-13 01:00:23.893937 | orchestrator | Monday 13 April 2026 00:58:38 +0000 (0:00:00.576) 0:00:25.285 ********** 2026-04-13 01:00:23.893945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 01:00:23.893953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 01:00:23.893961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 01:00:23.893968 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.893976 | orchestrator | 2026-04-13 01:00:23.893984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-13 01:00:23.893992 | orchestrator | Monday 13 April 2026 00:58:39 +0000 (0:00:00.381) 0:00:25.667 ********** 2026-04-13 01:00:23.893999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 01:00:23.894007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 01:00:23.894052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 01:00:23.894062 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.894070 | orchestrator | 2026-04-13 01:00:23.894078 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-13 01:00:23.894086 | orchestrator | Monday 13 April 2026 00:58:39 +0000 (0:00:00.434) 0:00:26.101 ********** 2026-04-13 01:00:23.894098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-13 01:00:23.894106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-13 01:00:23.894114 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-13 01:00:23.894122 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.894130 | orchestrator | 2026-04-13 01:00:23.894137 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-13 01:00:23.894145 | orchestrator | Monday 13 April 2026 00:58:40 +0000 (0:00:00.465) 0:00:26.567 ********** 2026-04-13 01:00:23.894153 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:00:23.894161 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:00:23.894169 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:00:23.894177 | orchestrator | 2026-04-13 01:00:23.894185 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-13 01:00:23.894193 | orchestrator | Monday 13 April 2026 00:58:40 +0000 (0:00:00.388) 0:00:26.955 ********** 2026-04-13 01:00:23.894201 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-13 01:00:23.894209 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-13 01:00:23.894217 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-13 01:00:23.894225 | orchestrator | 2026-04-13 01:00:23.894233 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-13 01:00:23.894241 | orchestrator | Monday 13 April 2026 00:58:40 +0000 (0:00:00.528) 0:00:27.484 ********** 2026-04-13 01:00:23.894261 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 01:00:23.894269 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 01:00:23.894277 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 01:00:23.894285 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-13 01:00:23.894293 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-13 01:00:23.894302 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-13 01:00:23.894310 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-13 01:00:23.894318 | orchestrator | 2026-04-13 01:00:23.894326 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-13 01:00:23.894333 | orchestrator | Monday 13 April 2026 00:58:41 +0000 (0:00:01.056) 0:00:28.541 ********** 2026-04-13 01:00:23.894342 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-13 01:00:23.894350 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-13 01:00:23.894363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-13 01:00:23.894371 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-13 01:00:23.894379 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-13 01:00:23.894386 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-13 01:00:23.894394 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-13 01:00:23.894402 | orchestrator | 2026-04-13 01:00:23.894411 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-13 01:00:23.894418 | orchestrator | Monday 13 April 2026 00:58:44 +0000 (0:00:02.024) 0:00:30.565 ********** 2026-04-13 01:00:23.894426 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:00:23.894434 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:00:23.894442 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-13 01:00:23.894450 | orchestrator | 2026-04-13 01:00:23.894458 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-13 01:00:23.894465 | orchestrator | Monday 13 April 2026 00:58:44 +0000 (0:00:00.382) 0:00:30.948 ********** 2026-04-13 01:00:23.894478 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 01:00:23.894489 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 01:00:23.894498 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 01:00:23.894506 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 01:00:23.894520 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-13 01:00:23.894528 | orchestrator | 2026-04-13 01:00:23.894536 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-13 01:00:23.894544 | orchestrator | Monday 13 April 2026 00:59:27 +0000 (0:00:43.574) 0:01:14.522 ********** 2026-04-13 01:00:23.894552 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894560 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894568 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894592 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894600 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-13 01:00:23.894608 | orchestrator | 2026-04-13 01:00:23.894622 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-13 01:00:23.894630 | orchestrator | Monday 13 April 2026 00:59:52 +0000 (0:00:24.320) 0:01:38.842 ********** 2026-04-13 01:00:23.894638 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894646 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894654 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894662 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894670 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894686 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-13 01:00:23.894693 | orchestrator | 2026-04-13 01:00:23.894701 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-13 01:00:23.894709 | orchestrator | Monday 13 April 2026 01:00:04 +0000 (0:00:12.040) 0:01:50.883 ********** 2026-04-13 01:00:23.894717 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894726 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 01:00:23.894734 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 01:00:23.894742 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894750 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 01:00:23.894758 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 01:00:23.894766 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894774 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 01:00:23.894782 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 01:00:23.894796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894810 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 01:00:23.894824 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 01:00:23.894849 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894862 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-13 01:00:23.894883 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 01:00:23.894897 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-13 01:00:23.894911 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-13 01:00:23.894926 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-13 01:00:23.894940 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-13 01:00:23.894948 | orchestrator | 2026-04-13 01:00:23.894956 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:00:23.894964 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-13 01:00:23.894973 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-13 01:00:23.894981 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-13 01:00:23.894989 | orchestrator | 2026-04-13 01:00:23.894997 | orchestrator | 2026-04-13 01:00:23.895013 | orchestrator | 2026-04-13 01:00:23.895021 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:00:23.895028 | orchestrator | Monday 13 April 2026 01:00:22 +0000 (0:00:17.853) 0:02:08.736 ********** 2026-04-13 01:00:23.895041 | orchestrator | =============================================================================== 2026-04-13 01:00:23.895049 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.57s 2026-04-13 01:00:23.895057 | orchestrator | generate keys ---------------------------------------------------------- 24.32s 2026-04-13 01:00:23.895065 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.85s 
2026-04-13 01:00:23.895072 | orchestrator | get keys from monitors ------------------------------------------------- 12.04s 2026-04-13 01:00:23.895080 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.07s 2026-04-13 01:00:23.895088 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.02s 2026-04-13 01:00:23.895096 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s 2026-04-13 01:00:23.895104 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.06s 2026-04-13 01:00:23.895111 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.99s 2026-04-13 01:00:23.895119 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s 2026-04-13 01:00:23.895127 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s 2026-04-13 01:00:23.895135 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s 2026-04-13 01:00:23.895143 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.76s 2026-04-13 01:00:23.895150 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s 2026-04-13 01:00:23.895158 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s 2026-04-13 01:00:23.895166 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2026-04-13 01:00:23.895173 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2026-04-13 01:00:23.895181 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2026-04-13 01:00:23.895189 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s 2026-04-13 
01:00:23.895197 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2026-04-13 01:00:23.895205 | orchestrator | 2026-04-13 01:00:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:23.895213 | orchestrator | 2026-04-13 01:00:23 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:23.895221 | orchestrator | 2026-04-13 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:26.946688 | orchestrator | 2026-04-13 01:00:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:26.948145 | orchestrator | 2026-04-13 01:00:26 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:26.948376 | orchestrator | 2026-04-13 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:29.986971 | orchestrator | 2026-04-13 01:00:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:29.988037 | orchestrator | 2026-04-13 01:00:29 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:29.988073 | orchestrator | 2026-04-13 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:33.045402 | orchestrator | 2026-04-13 01:00:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:33.047679 | orchestrator | 2026-04-13 01:00:33 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:33.047754 | orchestrator | 2026-04-13 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:36.097516 | orchestrator | 2026-04-13 01:00:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:36.100875 | orchestrator | 2026-04-13 01:00:36 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:36.100985 | orchestrator | 2026-04-13 01:00:36 | INFO  | Wait 
1 second(s) until the next check 2026-04-13 01:00:39.157280 | orchestrator | 2026-04-13 01:00:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:39.159937 | orchestrator | 2026-04-13 01:00:39 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:39.159993 | orchestrator | 2026-04-13 01:00:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:42.217986 | orchestrator | 2026-04-13 01:00:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:42.218351 | orchestrator | 2026-04-13 01:00:42 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:42.218798 | orchestrator | 2026-04-13 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:45.272159 | orchestrator | 2026-04-13 01:00:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:45.273093 | orchestrator | 2026-04-13 01:00:45 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:45.273394 | orchestrator | 2026-04-13 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:48.326946 | orchestrator | 2026-04-13 01:00:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:48.328170 | orchestrator | 2026-04-13 01:00:48 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:48.328268 | orchestrator | 2026-04-13 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:51.376911 | orchestrator | 2026-04-13 01:00:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:51.379399 | orchestrator | 2026-04-13 01:00:51 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:51.379454 | orchestrator | 2026-04-13 01:00:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:54.431991 | orchestrator | 
2026-04-13 01:00:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:54.432911 | orchestrator | 2026-04-13 01:00:54 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:54.432944 | orchestrator | 2026-04-13 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:00:57.494477 | orchestrator | 2026-04-13 01:00:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:00:57.495557 | orchestrator | 2026-04-13 01:00:57 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:00:57.495588 | orchestrator | 2026-04-13 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:00.546942 | orchestrator | 2026-04-13 01:01:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:00.550477 | orchestrator | 2026-04-13 01:01:00 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state STARTED 2026-04-13 01:01:00.551493 | orchestrator | 2026-04-13 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:03.602175 | orchestrator | 2026-04-13 01:01:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:03.603124 | orchestrator | 2026-04-13 01:01:03 | INFO  | Task 7ced85e6-e4de-4d8a-bab9-863ed32e807f is in state SUCCESS 2026-04-13 01:01:03.604206 | orchestrator | 2026-04-13 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:06.670076 | orchestrator | 2026-04-13 01:01:06 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:06.670799 | orchestrator | 2026-04-13 01:01:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:06.670903 | orchestrator | 2026-04-13 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:09.719321 | orchestrator | 2026-04-13 01:01:09 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in 
state STARTED 2026-04-13 01:01:09.720863 | orchestrator | 2026-04-13 01:01:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:09.720915 | orchestrator | 2026-04-13 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:12.767579 | orchestrator | 2026-04-13 01:01:12 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:12.769990 | orchestrator | 2026-04-13 01:01:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:12.770290 | orchestrator | 2026-04-13 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:15.827932 | orchestrator | 2026-04-13 01:01:15 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:15.831540 | orchestrator | 2026-04-13 01:01:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:15.832180 | orchestrator | 2026-04-13 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:18.889789 | orchestrator | 2026-04-13 01:01:18 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:18.895683 | orchestrator | 2026-04-13 01:01:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:18.895754 | orchestrator | 2026-04-13 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:21.934927 | orchestrator | 2026-04-13 01:01:21 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:21.935181 | orchestrator | 2026-04-13 01:01:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:21.935277 | orchestrator | 2026-04-13 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:24.987174 | orchestrator | 2026-04-13 01:01:24 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:24.988475 | orchestrator | 2026-04-13 01:01:24 | 
INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:24.988511 | orchestrator | 2026-04-13 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:28.048427 | orchestrator | 2026-04-13 01:01:28 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:28.048522 | orchestrator | 2026-04-13 01:01:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:28.048536 | orchestrator | 2026-04-13 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:31.103742 | orchestrator | 2026-04-13 01:01:31 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:31.104621 | orchestrator | 2026-04-13 01:01:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:31.104682 | orchestrator | 2026-04-13 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:34.159109 | orchestrator | 2026-04-13 01:01:34 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:34.159866 | orchestrator | 2026-04-13 01:01:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:34.160115 | orchestrator | 2026-04-13 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:37.211472 | orchestrator | 2026-04-13 01:01:37 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:37.213541 | orchestrator | 2026-04-13 01:01:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:37.213601 | orchestrator | 2026-04-13 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:40.265583 | orchestrator | 2026-04-13 01:01:40 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:40.267301 | orchestrator | 2026-04-13 01:01:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 
2026-04-13 01:01:40.267342 | orchestrator | 2026-04-13 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:43.320552 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:43.322674 | orchestrator | 2026-04-13 01:01:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:43.323001 | orchestrator | 2026-04-13 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:46.375960 | orchestrator | 2026-04-13 01:01:46 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:46.378107 | orchestrator | 2026-04-13 01:01:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:46.378156 | orchestrator | 2026-04-13 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:49.427087 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:49.429173 | orchestrator | 2026-04-13 01:01:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:49.429384 | orchestrator | 2026-04-13 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:52.491475 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:52.492902 | orchestrator | 2026-04-13 01:01:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:52.492950 | orchestrator | 2026-04-13 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:01:55.547365 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:55.548773 | orchestrator | 2026-04-13 01:01:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:55.548820 | orchestrator | 2026-04-13 01:01:55 | INFO  | Wait 
1 second(s) until the next check 2026-04-13 01:01:58.586978 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:01:58.589719 | orchestrator | 2026-04-13 01:01:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:01:58.589828 | orchestrator | 2026-04-13 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:01.636812 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:02:01.637880 | orchestrator | 2026-04-13 01:02:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:02:01.637950 | orchestrator | 2026-04-13 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:04.687811 | orchestrator | 2026-04-13 01:02:04 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state STARTED 2026-04-13 01:02:04.690326 | orchestrator | 2026-04-13 01:02:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:02:04.690370 | orchestrator | 2026-04-13 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:07.754336 | orchestrator | 2026-04-13 01:02:07.754443 | orchestrator | 2026-04-13 01:02:07.754461 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-13 01:02:07.754474 | orchestrator | 2026-04-13 01:02:07.754486 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-13 01:02:07.754499 | orchestrator | Monday 13 April 2026 01:00:26 +0000 (0:00:00.292) 0:00:00.292 ********** 2026-04-13 01:02:07.754511 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-13 01:02:07.754524 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.754535 | orchestrator | 
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.754547 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:02:07.754557 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.754568 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-13 01:02:07.754579 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-13 01:02:07.754591 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-13 01:02:07.754602 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-13 01:02:07.754612 | orchestrator | 2026-04-13 01:02:07.754623 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-13 01:02:07.754635 | orchestrator | Monday 13 April 2026 01:00:31 +0000 (0:00:04.890) 0:00:05.182 ********** 2026-04-13 01:02:07.754647 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-13 01:02:07.754659 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.754670 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.754683 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:02:07.754695 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.754707 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.nova.keyring) 2026-04-13 01:02:07.754719 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-13 01:02:07.754890 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-13 01:02:07.754907 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-13 01:02:07.754921 | orchestrator | 2026-04-13 01:02:07.754935 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-13 01:02:07.754948 | orchestrator | Monday 13 April 2026 01:00:35 +0000 (0:00:04.122) 0:00:09.305 ********** 2026-04-13 01:02:07.754964 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-13 01:02:07.754977 | orchestrator | 2026-04-13 01:02:07.754992 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-13 01:02:07.755030 | orchestrator | Monday 13 April 2026 01:00:36 +0000 (0:00:01.122) 0:00:10.427 ********** 2026-04-13 01:02:07.755042 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-13 01:02:07.755054 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.755064 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.755075 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:02:07.755084 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.755095 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-13 01:02:07.755105 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-13 01:02:07.755131 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-13 01:02:07.755143 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-13 01:02:07.755153 | orchestrator | 2026-04-13 01:02:07.755163 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-13 01:02:07.755173 | orchestrator | Monday 13 April 2026 01:00:51 +0000 (0:00:14.964) 0:00:25.392 ********** 2026-04-13 01:02:07.755208 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-13 01:02:07.755220 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-13 01:02:07.755232 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-13 01:02:07.755243 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-13 01:02:07.755277 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-13 01:02:07.755289 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-13 01:02:07.755300 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-13 01:02:07.755311 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-13 01:02:07.755321 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-13 01:02:07.755332 | orchestrator | 2026-04-13 01:02:07.755342 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-13 01:02:07.755352 | orchestrator | Monday 13 April 2026 01:00:54 +0000 (0:00:03.571) 
0:00:28.963 ********** 2026-04-13 01:02:07.755362 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-13 01:02:07.755372 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.755382 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.755392 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-13 01:02:07.755401 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-13 01:02:07.755411 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-13 01:02:07.755421 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-13 01:02:07.755431 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-13 01:02:07.755440 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-13 01:02:07.755451 | orchestrator | 2026-04-13 01:02:07.755461 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:02:07.755472 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:02:07.755496 | orchestrator | 2026-04-13 01:02:07.755506 | orchestrator | 2026-04-13 01:02:07.755517 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:02:07.755529 | orchestrator | Monday 13 April 2026 01:01:02 +0000 (0:00:07.345) 0:00:36.309 ********** 2026-04-13 01:02:07.755538 | orchestrator | =============================================================================== 2026-04-13 01:02:07.755549 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.96s 2026-04-13 01:02:07.755560 | orchestrator | Write ceph keys to the configuration directory -------------------------- 
7.35s 2026-04-13 01:02:07.755570 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.89s 2026-04-13 01:02:07.755580 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.12s 2026-04-13 01:02:07.755591 | orchestrator | Check if target directories exist --------------------------------------- 3.57s 2026-04-13 01:02:07.755601 | orchestrator | Create share directory -------------------------------------------------- 1.12s 2026-04-13 01:02:07.755611 | orchestrator | 2026-04-13 01:02:07.755622 | orchestrator | 2026-04-13 01:02:07.755632 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-13 01:02:07.755644 | orchestrator | 2026-04-13 01:02:07.755651 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-13 01:02:07.755657 | orchestrator | Monday 13 April 2026 01:01:06 +0000 (0:00:00.317) 0:00:00.317 ********** 2026-04-13 01:02:07.755663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-13 01:02:07.755670 | orchestrator | 2026-04-13 01:02:07.755677 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-13 01:02:07.755683 | orchestrator | Monday 13 April 2026 01:01:06 +0000 (0:00:00.243) 0:00:00.561 ********** 2026-04-13 01:02:07.755689 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-13 01:02:07.755695 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-13 01:02:07.755701 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-13 01:02:07.755708 | orchestrator | 2026-04-13 01:02:07.755714 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-13 01:02:07.755720 | orchestrator | Monday 13 April 
2026 01:01:08 +0000 (0:00:01.738) 0:00:02.299 ********** 2026-04-13 01:02:07.755726 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-13 01:02:07.755732 | orchestrator | 2026-04-13 01:02:07.755745 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-13 01:02:07.755751 | orchestrator | Monday 13 April 2026 01:01:09 +0000 (0:00:01.272) 0:00:03.571 ********** 2026-04-13 01:02:07.755758 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:07.755764 | orchestrator | 2026-04-13 01:02:07.755770 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-13 01:02:07.755776 | orchestrator | Monday 13 April 2026 01:01:10 +0000 (0:00:01.010) 0:00:04.582 ********** 2026-04-13 01:02:07.755782 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:07.755788 | orchestrator | 2026-04-13 01:02:07.755794 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-13 01:02:07.755800 | orchestrator | Monday 13 April 2026 01:01:11 +0000 (0:00:00.951) 0:00:05.534 ********** 2026-04-13 01:02:07.755806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-04-13 01:02:07.755813 | orchestrator | ok: [testbed-manager] 2026-04-13 01:02:07.755819 | orchestrator | 2026-04-13 01:02:07.755825 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-13 01:02:07.755840 | orchestrator | Monday 13 April 2026 01:01:54 +0000 (0:00:43.055) 0:00:48.589 ********** 2026-04-13 01:02:07.755847 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-13 01:02:07.755854 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-13 01:02:07.755868 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-13 01:02:07.755874 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-13 01:02:07.755880 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-13 01:02:07.755886 | orchestrator | 2026-04-13 01:02:07.755893 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-13 01:02:07.755899 | orchestrator | Monday 13 April 2026 01:01:59 +0000 (0:00:04.531) 0:00:53.121 ********** 2026-04-13 01:02:07.755905 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-13 01:02:07.755911 | orchestrator | 2026-04-13 01:02:07.755917 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-13 01:02:07.755923 | orchestrator | Monday 13 April 2026 01:01:59 +0000 (0:00:00.695) 0:00:53.816 ********** 2026-04-13 01:02:07.755929 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:02:07.755935 | orchestrator | 2026-04-13 01:02:07.755942 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-13 01:02:07.755948 | orchestrator | Monday 13 April 2026 01:01:59 +0000 (0:00:00.139) 0:00:53.956 ********** 2026-04-13 01:02:07.755954 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:02:07.755959 | orchestrator | 2026-04-13 01:02:07.755964 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-04-13 01:02:07.755970 | orchestrator | Monday 13 April 2026 01:02:00 +0000 (0:00:00.313) 0:00:54.269 ********** 2026-04-13 01:02:07.755975 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:07.755980 | orchestrator | 2026-04-13 01:02:07.755986 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-13 01:02:07.755991 | orchestrator | Monday 13 April 2026 01:02:01 +0000 (0:00:01.551) 0:00:55.820 ********** 2026-04-13 01:02:07.755997 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:07.756002 | orchestrator | 2026-04-13 01:02:07.756007 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-13 01:02:07.756013 | orchestrator | Monday 13 April 2026 01:02:02 +0000 (0:00:00.741) 0:00:56.562 ********** 2026-04-13 01:02:07.756018 | orchestrator | changed: [testbed-manager] 2026-04-13 01:02:07.756023 | orchestrator | 2026-04-13 01:02:07.756029 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-13 01:02:07.756034 | orchestrator | Monday 13 April 2026 01:02:03 +0000 (0:00:00.610) 0:00:57.173 ********** 2026-04-13 01:02:07.756039 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-13 01:02:07.756045 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-13 01:02:07.756050 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-13 01:02:07.756056 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-13 01:02:07.756061 | orchestrator | 2026-04-13 01:02:07.756066 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:02:07.756072 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-13 01:02:07.756077 | orchestrator | 2026-04-13 01:02:07.756083 | orchestrator | 2026-04-13 
01:02:07.756088 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:02:07.756093 | orchestrator | Monday 13 April 2026 01:02:04 +0000 (0:00:01.618) 0:00:58.791 ********** 2026-04-13 01:02:07.756099 | orchestrator | =============================================================================== 2026-04-13 01:02:07.756104 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.06s 2026-04-13 01:02:07.756109 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.53s 2026-04-13 01:02:07.756115 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.74s 2026-04-13 01:02:07.756120 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.62s 2026-04-13 01:02:07.756125 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.55s 2026-04-13 01:02:07.756131 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.27s 2026-04-13 01:02:07.756141 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.01s 2026-04-13 01:02:07.756146 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2026-04-13 01:02:07.756151 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2026-04-13 01:02:07.756157 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.70s 2026-04-13 01:02:07.756162 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2026-04-13 01:02:07.756171 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2026-04-13 01:02:07.756176 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-04-13 01:02:07.756207 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-04-13 01:02:07.756214 | orchestrator | 2026-04-13 01:02:07 | INFO  | Task e3e2f9f3-2613-4db7-9e9c-c1e9535d1f30 is in state SUCCESS 2026-04-13 01:02:07.756219 | orchestrator | 2026-04-13 01:02:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:02:07.756939 | orchestrator | 2026-04-13 01:02:07 | INFO  | Task 7b13dc03-4971-48ae-9463-095d5cf916a4 is in state STARTED 2026-04-13 01:02:07.758845 | orchestrator | 2026-04-13 01:02:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:02:07.760374 | orchestrator | 2026-04-13 01:02:07 | INFO  | Task 0f89b1d4-083e-43a1-925b-60561b5bbf5b is in state STARTED 2026-04-13 01:02:07.760412 | orchestrator | 2026-04-13 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:10.810502 | orchestrator | 2026-04-13 01:02:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:02:10.813451 | orchestrator | 2026-04-13 01:02:10 | INFO  | Task 7b13dc03-4971-48ae-9463-095d5cf916a4 is in state STARTED 2026-04-13 01:02:10.816030 | orchestrator | 2026-04-13 01:02:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:02:10.817386 | orchestrator | 2026-04-13 01:02:10 | INFO  | Task 0f89b1d4-083e-43a1-925b-60561b5bbf5b is in state STARTED 2026-04-13 01:02:10.818105 | orchestrator | 2026-04-13 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:02:13.920763 | orchestrator | 2026-04-13 01:02:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:02:13.920867 | orchestrator | 2026-04-13 01:02:13 | INFO  | Task 7b13dc03-4971-48ae-9463-095d5cf916a4 is in state STARTED 2026-04-13 01:02:13.920882 | orchestrator | 2026-04-13 01:02:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:02:13.920894 | orchestrator | 
2026-04-13 01:02:13 | INFO  | Task 0f89b1d4-083e-43a1-925b-60561b5bbf5b is in state STARTED 2026-04-13 01:02:13.920906 | orchestrator | 2026-04-13 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:27.180840 | orchestrator | 2026-04-13 01:03:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:27.187738 | orchestrator | 2026-04-13 01:03:27 | INFO  | Task 7b13dc03-4971-48ae-9463-095d5cf916a4 is in state SUCCESS 2026-04-13 01:03:27.188887 | orchestrator | 2026-04-13 01:03:27.188968 | orchestrator | 2026-04-13 01:03:27.188981 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:03:27.188991 | 
orchestrator | 2026-04-13 01:03:27.189000 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:03:27.189009 | orchestrator | Monday 13 April 2026 01:02:08 +0000 (0:00:00.384) 0:00:00.384 ********** 2026-04-13 01:03:27.189038 | orchestrator | ok: [testbed-manager] 2026-04-13 01:03:27.189048 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:03:27.189056 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:03:27.189112 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:03:27.189122 | orchestrator | ok: [testbed-node-3] 2026-04-13 01:03:27.189168 | orchestrator | ok: [testbed-node-4] 2026-04-13 01:03:27.189178 | orchestrator | ok: [testbed-node-5] 2026-04-13 01:03:27.189200 | orchestrator | 2026-04-13 01:03:27.189209 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:03:27.189256 | orchestrator | Monday 13 April 2026 01:02:09 +0000 (0:00:00.785) 0:00:01.170 ********** 2026-04-13 01:03:27.189266 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189275 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189294 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189303 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189312 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189320 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189329 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-13 01:03:27.189337 | orchestrator | 2026-04-13 01:03:27.189346 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-13 01:03:27.189356 | orchestrator | 2026-04-13 01:03:27.189371 | orchestrator | TASK [prometheus : include_tasks] 
********************************************** 2026-04-13 01:03:27.189385 | orchestrator | Monday 13 April 2026 01:02:10 +0000 (0:00:01.265) 0:00:02.435 ********** 2026-04-13 01:03:27.189401 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:03:27.189457 | orchestrator | 2026-04-13 01:03:27.189468 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-13 01:03:27.189480 | orchestrator | Monday 13 April 2026 01:02:12 +0000 (0:00:01.382) 0:00:03.818 ********** 2026-04-13 01:03:27.189493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189511 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-13 01:03:27.189524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189681 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189773 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:27.189784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189795 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189846 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.189869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-13 01:03:27.189878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.189887 | orchestrator | 2026-04-13 01:03:27.189896 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-13 01:03:27.189905 | orchestrator | Monday 13 April 2026 01:02:16 +0000 (0:00:03.749) 0:00:07.567 ********** 2026-04-13 01:03:27.189914 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-13 01:03:27.189922 | orchestrator | 2026-04-13 01:03:27.189931 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-13 01:03:27.189940 | orchestrator | Monday 13 April 2026 01:02:17 +0000 (0:00:01.306) 0:00:08.874 ********** 2026-04-13 01:03:27.189956 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-13 01:03:27.189981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.189998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.190065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.190079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.190089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.190098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.190114 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-13 01:03:27.190124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 
01:03:27.190246 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190374 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:27.190393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.190683 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.190744 | orchestrator | 2026-04-13 01:03:27.190756 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-13 01:03:27.190766 | orchestrator | Monday 13 April 2026 01:02:22 +0000 (0:00:05.511) 0:00:14.386 ********** 2026-04-13 01:03:27.190777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:03:27.190801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:03:27.190819 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-13 01:03:27.190832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-13 01:03:27.190849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-13 01:03:27.190859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-13 01:03:27.190869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.190879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.190897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.190908 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:03:27.190923 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.190934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.190950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:27.190961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.190972 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.190982 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:03:27.190999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191020 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:03:27.191033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191099 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:03:27.191115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191207 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:03:27.191219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191230 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:03:27.191240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191270 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:03:27.191280 | orchestrator |
2026-04-13 01:03:27.191290 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-13 01:03:27.191300 | orchestrator | Monday 13 April 2026 01:02:24 +0000 (0:00:01.857) 0:00:16.243 **********
2026-04-13 01:03:27.191323 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 01:03:27.191341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191397 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191427 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:27.191439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.191479 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:03:27.191494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.191521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.191531 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:03:27.192105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192168 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:03:27.192177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192203 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:03:27.192222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192279 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:03:27.192287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192295 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:03:27.192303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.192333 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:03:27.192341 | orchestrator |
2026-04-13 01:03:27.192349 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-13 01:03:27.192361 | orchestrator | Monday 13 April 2026 01:02:27 +0000 (0:00:02.305) 0:00:18.549 **********
2026-04-13 01:03:27.192375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 01:03:27.192385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192436 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.192466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.192487 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192504 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192678 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:27.192696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-13 01:03:27.192719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192727 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192735 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-13 01:03:27.192755 | orchestrator | 2026-04-13 01:03:27.192763 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-13 01:03:27.192777 | orchestrator | Monday 13 April 2026 01:02:32 +0000 (0:00:05.877) 0:00:24.426 ********** 2026-04-13 01:03:27.192786 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:03:27.192794 | orchestrator | 2026-04-13 01:03:27.192802 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-13 01:03:27.192810 | orchestrator | Monday 13 April 2026 01:02:33 +0000 (0:00:01.010) 0:00:25.437 ********** 2026-04-13 01:03:27.192818 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:03:27.192826 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.192834 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.192841 | orchestrator | 
skipping: [testbed-node-2] 2026-04-13 01:03:27.192849 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.192857 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.192864 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.192872 | orchestrator | 2026-04-13 01:03:27.192880 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-13 01:03:27.192888 | orchestrator | Monday 13 April 2026 01:02:34 +0000 (0:00:01.026) 0:00:26.464 ********** 2026-04-13 01:03:27.192895 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:03:27.192908 | orchestrator | 2026-04-13 01:03:27.192916 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-13 01:03:27.192923 | orchestrator | Monday 13 April 2026 01:02:35 +0000 (0:00:00.851) 0:00:27.315 ********** 2026-04-13 01:03:27.192931 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.192939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.192948 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-13 01:03:27.192955 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.192963 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.192971 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:03:27.192979 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.192987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.192995 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-13 01:03:27.193002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193010 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.193018 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-04-13 01:03:27.193026 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.193034 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193041 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-13 01:03:27.193049 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193057 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.193065 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-13 01:03:27.193073 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.193080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193102 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-13 01:03:27.193110 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193118 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.193126 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-13 01:03:27.193151 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.193160 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193186 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-13 01:03:27.193194 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193210 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.193218 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-13 01:03:27.193226 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.193252 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193260 | orchestrator | node-4/prometheus.yml.d' path due to this access 
issue: 2026-04-13 01:03:27.193268 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193276 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.193292 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-13 01:03:27.193300 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.193308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193316 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-13 01:03:27.193324 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-13 01:03:27.193332 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-13 01:03:27.193340 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-13 01:03:27.193348 | orchestrator | 2026-04-13 01:03:27.193362 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-13 01:03:27.193373 | orchestrator | Monday 13 April 2026 01:02:37 +0000 (0:00:01.879) 0:00:29.195 ********** 2026-04-13 01:03:27.193381 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:03:27.193390 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.193398 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:03:27.193406 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.193414 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:03:27.193422 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.193434 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:03:27.193443 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:27.193450 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:03:27.193458 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.193466 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-13 01:03:27.193474 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.193482 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-13 01:03:27.193490 | orchestrator | 2026-04-13 01:03:27.193498 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-13 01:03:27.193505 | orchestrator | Monday 13 April 2026 01:02:54 +0000 (0:00:16.452) 0:00:45.647 ********** 2026-04-13 01:03:27.193513 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:03:27.193521 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.193529 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:03:27.193537 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:03:27.193545 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.193553 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:27.193561 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:03:27.193568 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.193576 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-13 01:03:27.193584 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.193592 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 
 2026-04-13 01:03:27.193600 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.193608 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-13 01:03:27.193616 | orchestrator | 2026-04-13 01:03:27.193623 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-13 01:03:27.193631 | orchestrator | Monday 13 April 2026 01:02:57 +0000 (0:00:03.566) 0:00:49.214 ********** 2026-04-13 01:03:27.193639 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:03:27.193647 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.193655 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:03:27.193663 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:03:27.193671 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.193679 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:27.193692 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:03:27.193699 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.193707 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:03:27.193715 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.193723 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-13 01:03:27.193731 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-13 01:03:27.193739 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.193747 | orchestrator | 2026-04-13 01:03:27.193755 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-13 01:03:27.193763 | orchestrator | Monday 13 April 2026 01:02:59 +0000 (0:00:01.710) 0:00:50.924 ********** 2026-04-13 01:03:27.193770 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-13 01:03:27.193778 | orchestrator | 2026-04-13 01:03:27.193786 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-13 01:03:27.193794 | orchestrator | Monday 13 April 2026 01:03:00 +0000 (0:00:00.806) 0:00:51.730 ********** 2026-04-13 01:03:27.193802 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:03:27.193809 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.193817 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.193825 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:27.193833 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.193844 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.193852 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.193860 | orchestrator | 2026-04-13 01:03:27.193868 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-13 01:03:27.193876 | orchestrator | Monday 13 April 2026 01:03:01 +0000 (0:00:00.944) 0:00:52.675 ********** 2026-04-13 01:03:27.193884 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:03:27.193891 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.193899 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.193907 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.193915 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:03:27.193923 | 
orchestrator | changed: [testbed-node-1] 2026-04-13 01:03:27.193930 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:03:27.193938 | orchestrator | 2026-04-13 01:03:27.193950 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-13 01:03:27.193958 | orchestrator | Monday 13 April 2026 01:03:03 +0000 (0:00:02.015) 0:00:54.690 ********** 2026-04-13 01:03:27.193966 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.193974 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:03:27.193982 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.193990 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.193998 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.194005 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.194013 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.194202 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:27.194213 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.194224 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.194247 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.194260 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.194283 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-13 01:03:27.194296 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.194309 | orchestrator | 2026-04-13 01:03:27.194322 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-13 
01:03:27.194336 | orchestrator | Monday 13 April 2026 01:03:04 +0000 (0:00:01.703) 0:00:56.394 ********** 2026-04-13 01:03:27.194349 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-13 01:03:27.194362 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:27.194376 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-13 01:03:27.194389 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:27.194403 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-13 01:03:27.194417 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:27.194431 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-13 01:03:27.194444 | orchestrator | skipping: [testbed-node-3] 2026-04-13 01:03:27.194457 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-13 01:03:27.194470 | orchestrator | skipping: [testbed-node-4] 2026-04-13 01:03:27.194484 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-13 01:03:27.194497 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-13 01:03:27.194511 | orchestrator | skipping: [testbed-node-5] 2026-04-13 01:03:27.194525 | orchestrator | 2026-04-13 01:03:27.194538 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-13 01:03:27.194552 | orchestrator | Monday 13 April 2026 01:03:06 +0000 (0:00:01.939) 0:00:58.334 ********** 2026-04-13 01:03:27.194565 | orchestrator | [WARNING]: Skipped 2026-04-13 01:03:27.194579 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-13 01:03:27.194592 | orchestrator | due to this access issue:
2026-04-13 01:03:27.194605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-13 01:03:27.194618 | orchestrator | not a directory
2026-04-13 01:03:27.194631 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-13 01:03:27.194644 | orchestrator |
2026-04-13 01:03:27.194658 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-13 01:03:27.194671 | orchestrator | Monday 13 April 2026 01:03:08 +0000 (0:00:01.260) 0:00:59.594 **********
2026-04-13 01:03:27.194685 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:03:27.194693 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:03:27.194701 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:03:27.194708 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:03:27.194716 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:03:27.194724 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:03:27.194732 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:03:27.194739 | orchestrator |
2026-04-13 01:03:27.194747 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-13 01:03:27.194755 | orchestrator | Monday 13 April 2026 01:03:08 +0000 (0:00:00.893) 0:01:00.488 **********
2026-04-13 01:03:27.194763 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:03:27.194770 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:03:27.194778 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:03:27.194786 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:03:27.194799 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:03:27.194807 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:03:27.194815 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:03:27.194822 | orchestrator |
2026-04-13 01:03:27.194836 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-04-13 01:03:27.194844 | orchestrator | Monday 13 April 2026 01:03:09 +0000 (0:00:00.955) 0:01:01.443 **********
2026-04-13 01:03:27.194859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194878 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 01:03:27.194888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.194920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.194945 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.194961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.194970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.194978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.194991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195022 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195093 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:27.195103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195213 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195236 | orchestrator |
2026-04-13 01:03:27.195244 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-13 01:03:27.195252 | orchestrator | Monday 13 April 2026 01:03:14 +0000 (0:00:04.302) 0:01:05.745 **********
2026-04-13 01:03:27.195260 | orchestrator | changed: [testbed-manager] => {
2026-04-13 01:03:27.195268 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195276 | orchestrator | }
2026-04-13 01:03:27.195284 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 01:03:27.195292 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195300 | orchestrator | }
2026-04-13 01:03:27.195308 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 01:03:27.195315 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195323 | orchestrator | }
2026-04-13 01:03:27.195331 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 01:03:27.195339 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195347 | orchestrator | }
2026-04-13 01:03:27.195355 | orchestrator | changed: [testbed-node-3] => {
2026-04-13 01:03:27.195363 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195371 | orchestrator | }
2026-04-13 01:03:27.195378 | orchestrator | changed: [testbed-node-4] => {
2026-04-13 01:03:27.195386 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195394 | orchestrator | }
2026-04-13 01:03:27.195402 | orchestrator | changed: [testbed-node-5] => {
2026-04-13 01:03:27.195410 | orchestrator |  "msg": "Notifying handlers"
2026-04-13 01:03:27.195418 | orchestrator | }
2026-04-13 01:03:27.195426 | orchestrator |
2026-04-13 01:03:27.195434 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 01:03:27.195442 | orchestrator | Monday 13 April 2026 01:03:15 +0000 (0:00:00.835) 0:01:06.580 **********
2026-04-13 01:03:27.195450 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-13 01:03:27.195464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195473 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195490 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:27.195500 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195558 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:03:27.195570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-13 01:03:27.195667 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:03:27.195675 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:03:27.195683 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:03:27.195691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195720 | orchestrator | skipping: [testbed-node-3]
2026-04-13 01:03:27.195727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195754 | orchestrator | skipping: [testbed-node-4]
2026-04-13 01:03:27.195761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-13 01:03:27.195768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-13 01:03:27.195786 | orchestrator | skipping: [testbed-node-5]
2026-04-13 01:03:27.195792 | orchestrator |
2026-04-13 01:03:27.195799 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-13 01:03:27.195806 | orchestrator | Monday 13 April 2026 01:03:16 +0000 (0:00:01.878) 0:01:08.459 **********
2026-04-13 01:03:27.195813 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-13 01:03:27.195820 | orchestrator | skipping: [testbed-manager]
2026-04-13 01:03:27.195826 | orchestrator |
2026-04-13 01:03:27.195833 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:03:27.195840 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:01.066) 0:01:09.526 **********
2026-04-13 01:03:27.195846 | orchestrator |
2026-04-13 01:03:27.195853 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:03:27.195860 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.065) 0:01:09.591 **********
2026-04-13 01:03:27.195866 | orchestrator |
2026-04-13 01:03:27.195873 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-13 01:03:27.195880 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.202) 0:01:09.793 **********
2026-04-13 01:03:27.195886 | orchestrator |
2026-04-13 01:03:27.195893 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-13 01:03:27.195900 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.059) 0:01:09.852 ********** 2026-04-13 01:03:27.195906 | orchestrator | 2026-04-13 01:03:27.195913 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-13 01:03:27.195919 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.059) 0:01:09.911 ********** 2026-04-13 01:03:27.195926 | orchestrator | 2026-04-13 01:03:27.195933 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-13 01:03:27.195939 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.057) 0:01:09.969 ********** 2026-04-13 01:03:27.195946 | orchestrator | 2026-04-13 01:03:27.195953 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-13 01:03:27.195960 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.077) 0:01:10.046 ********** 2026-04-13 01:03:27.195966 | orchestrator | 2026-04-13 01:03:27.195973 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-13 01:03:27.195980 | orchestrator | Monday 13 April 2026 01:03:18 +0000 (0:00:00.081) 0:01:10.128 ********** 2026-04-13 01:03:27.195995 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_hm4nrgw0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_hm4nrgw0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_hm4nrgw0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_hm4nrgw0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-server not found\")\\n'"} 2026-04-13 01:03:27.196010 | orchestrator | 2026-04-13 01:03:27.196017 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-13 01:03:27.196023 | orchestrator | Monday 13 April 2026 01:03:20 +0000 (0:00:02.368) 0:01:12.496 ********** 2026-04-13 01:03:27.196038 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_lyg0h_op/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_lyg0h_op/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_lyg0h_op/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_lyg0h_op/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-13 01:03:27.196051 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_b744uotd/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_b744uotd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_b744uotd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_b744uotd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-13 01:03:27.196067 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_yux_dkry/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_yux_dkry/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_yux_dkry/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_yux_dkry/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-13 01:03:27.196083 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_uek909t1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_uek909t1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_uek909t1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_uek909t1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-13 01:03:27.196100 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_950dhuco/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_950dhuco/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_950dhuco/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_950dhuco/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-13 01:03:27.196117 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pg5ftaas/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pg5ftaas/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_pg5ftaas/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pg5ftaas/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-13 01:03:27.196143 | orchestrator | 2026-04-13 01:03:27.196152 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:03:27.196159 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-04-13 01:03:27.196167 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-13 01:03:27.196174 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-13 01:03:27.196181 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-13 01:03:27.196187 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-13 01:03:27.196194 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-13 01:03:27.196201 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 
2026-04-13 01:03:27.196207 | orchestrator | 2026-04-13 01:03:27.196214 | orchestrator | 2026-04-13 01:03:27.196221 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:03:27.196228 | orchestrator | Monday 13 April 2026 01:03:25 +0000 (0:00:04.075) 0:01:16.572 ********** 2026-04-13 01:03:27.196234 | orchestrator | =============================================================================== 2026-04-13 01:03:27.196241 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.45s 2026-04-13 01:03:27.196248 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.88s 2026-04-13 01:03:27.196254 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.51s 2026-04-13 01:03:27.196261 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.30s 2026-04-13 01:03:27.196268 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 4.08s 2026-04-13 01:03:27.196274 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.75s 2026-04-13 01:03:27.196281 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.57s 2026-04-13 01:03:27.196287 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 2.37s 2026-04-13 01:03:27.196294 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.31s 2026-04-13 01:03:27.196301 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.02s 2026-04-13 01:03:27.196312 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.94s 2026-04-13 01:03:27.196319 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.88s 2026-04-13 01:03:27.196326 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 1.88s 2026-04-13 01:03:27.196335 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.86s 2026-04-13 01:03:27.196342 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.71s 2026-04-13 01:03:27.196349 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.70s 2026-04-13 01:03:27.196355 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.38s 2026-04-13 01:03:27.196362 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.31s 2026-04-13 01:03:27.196369 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s 2026-04-13 01:03:27.196376 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.26s 2026-04-13 01:03:27.196386 | orchestrator | 2026-04-13 01:03:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:27.196393 | orchestrator | 2026-04-13 01:03:27 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:27.196399 | orchestrator | 2026-04-13 01:03:27 | INFO  | Task 0f89b1d4-083e-43a1-925b-60561b5bbf5b is in state STARTED 2026-04-13 01:03:27.196406 | orchestrator | 2026-04-13 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:30.240680 | orchestrator | 2026-04-13 01:03:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:30.243052 | orchestrator | 2026-04-13 01:03:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:30.245079 | orchestrator | 2026-04-13 01:03:30 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:30.247312 | orchestrator | 2026-04-13 01:03:30 | INFO  | Task 
0f89b1d4-083e-43a1-925b-60561b5bbf5b is in state SUCCESS 2026-04-13 01:03:30.247343 | orchestrator | 2026-04-13 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:33.297478 | orchestrator | 2026-04-13 01:03:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:33.299342 | orchestrator | 2026-04-13 01:03:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:33.301344 | orchestrator | 2026-04-13 01:03:33 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:33.301422 | orchestrator | 2026-04-13 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:36.347691 | orchestrator | 2026-04-13 01:03:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:36.351236 | orchestrator | 2026-04-13 01:03:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:36.352218 | orchestrator | 2026-04-13 01:03:36 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:36.352304 | orchestrator | 2026-04-13 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:39.400307 | orchestrator | 2026-04-13 01:03:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:39.401946 | orchestrator | 2026-04-13 01:03:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:39.403794 | orchestrator | 2026-04-13 01:03:39 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:39.403875 | orchestrator | 2026-04-13 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:42.453233 | orchestrator | 2026-04-13 01:03:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:42.454618 | orchestrator | 2026-04-13 01:03:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state 
STARTED 2026-04-13 01:03:42.457275 | orchestrator | 2026-04-13 01:03:42 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:42.457314 | orchestrator | 2026-04-13 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:45.494627 | orchestrator | 2026-04-13 01:03:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:45.496948 | orchestrator | 2026-04-13 01:03:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:45.499309 | orchestrator | 2026-04-13 01:03:45 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state STARTED 2026-04-13 01:03:45.499365 | orchestrator | 2026-04-13 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:03:48.546282 | orchestrator | 2026-04-13 01:03:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:03:48.547938 | orchestrator | 2026-04-13 01:03:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:03:48.550632 | orchestrator | 2026-04-13 01:03:48 | INFO  | Task 41f844d4-617e-4d74-925f-6cf83c1387b4 is in state SUCCESS 2026-04-13 01:03:48.552023 | orchestrator | 2026-04-13 01:03:48.552069 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-13 01:03:48.552082 | orchestrator | 2.16.14 2026-04-13 01:03:48.552095 | orchestrator | 2026-04-13 01:03:48.552106 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-13 01:03:48.552154 | orchestrator | 2026-04-13 01:03:48.552166 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-13 01:03:48.552178 | orchestrator | Monday 13 April 2026 01:02:10 +0000 (0:00:00.248) 0:00:00.248 ********** 2026-04-13 01:03:48.552189 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552201 | orchestrator | 2026-04-13 01:03:48.552212 | 
orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-13 01:03:48.552223 | orchestrator | Monday 13 April 2026 01:02:12 +0000 (0:00:02.309) 0:00:02.557 ********** 2026-04-13 01:03:48.552234 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552245 | orchestrator | 2026-04-13 01:03:48.552255 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-13 01:03:48.552268 | orchestrator | Monday 13 April 2026 01:02:13 +0000 (0:00:01.277) 0:00:03.834 ********** 2026-04-13 01:03:48.552279 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552290 | orchestrator | 2026-04-13 01:03:48.552301 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-13 01:03:48.552312 | orchestrator | Monday 13 April 2026 01:02:15 +0000 (0:00:01.267) 0:00:05.102 ********** 2026-04-13 01:03:48.552323 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552334 | orchestrator | 2026-04-13 01:03:48.552345 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-13 01:03:48.552355 | orchestrator | Monday 13 April 2026 01:02:16 +0000 (0:00:01.302) 0:00:06.405 ********** 2026-04-13 01:03:48.552366 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552377 | orchestrator | 2026-04-13 01:03:48.552388 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-13 01:03:48.552411 | orchestrator | Monday 13 April 2026 01:02:17 +0000 (0:00:01.172) 0:00:07.577 ********** 2026-04-13 01:03:48.552422 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552433 | orchestrator | 2026-04-13 01:03:48.552444 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-13 01:03:48.552455 | orchestrator | Monday 13 April 2026 01:02:18 +0000 (0:00:01.113) 0:00:08.690 ********** 
2026-04-13 01:03:48.552492 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552503 | orchestrator | 2026-04-13 01:03:48.552514 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-13 01:03:48.552525 | orchestrator | Monday 13 April 2026 01:02:20 +0000 (0:00:01.953) 0:00:10.644 ********** 2026-04-13 01:03:48.552535 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552546 | orchestrator | 2026-04-13 01:03:48.552557 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-13 01:03:48.552567 | orchestrator | Monday 13 April 2026 01:02:21 +0000 (0:00:01.178) 0:00:11.822 ********** 2026-04-13 01:03:48.552578 | orchestrator | changed: [testbed-manager] 2026-04-13 01:03:48.552588 | orchestrator | 2026-04-13 01:03:48.552599 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-13 01:03:48.552613 | orchestrator | Monday 13 April 2026 01:03:03 +0000 (0:00:41.372) 0:00:53.195 ********** 2026-04-13 01:03:48.552626 | orchestrator | skipping: [testbed-manager] 2026-04-13 01:03:48.552639 | orchestrator | 2026-04-13 01:03:48.552651 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-13 01:03:48.552679 | orchestrator | 2026-04-13 01:03:48.552702 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-13 01:03:48.552715 | orchestrator | Monday 13 April 2026 01:03:03 +0000 (0:00:00.198) 0:00:53.393 ********** 2026-04-13 01:03:48.552728 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:03:48.552740 | orchestrator | 2026-04-13 01:03:48.552752 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-13 01:03:48.552764 | orchestrator | 2026-04-13 01:03:48.552776 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2026-04-13 01:03:48.552788 | orchestrator | Monday 13 April 2026 01:03:15 +0000 (0:00:12.209) 0:01:05.603 ********** 2026-04-13 01:03:48.552800 | orchestrator | changed: [testbed-node-1] 2026-04-13 01:03:48.552813 | orchestrator | 2026-04-13 01:03:48.552825 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-13 01:03:48.552837 | orchestrator | 2026-04-13 01:03:48.552849 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-13 01:03:48.552861 | orchestrator | Monday 13 April 2026 01:03:17 +0000 (0:00:01.583) 0:01:07.187 ********** 2026-04-13 01:03:48.552874 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:03:48.552887 | orchestrator | 2026-04-13 01:03:48.552899 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-13 01:03:48.552913 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-13 01:03:48.552927 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:03:48.552940 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:03:48.552953 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-13 01:03:48.552965 | orchestrator | 2026-04-13 01:03:48.552978 | orchestrator | 2026-04-13 01:03:48.552989 | orchestrator | 2026-04-13 01:03:48.552999 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-13 01:03:48.553010 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:11.475) 0:01:18.662 ********** 2026-04-13 01:03:48.553021 | orchestrator | =============================================================================== 2026-04-13 01:03:48.553044 | orchestrator | 
Create admin user ------------------------------------------------------ 41.37s 2026-04-13 01:03:48.553069 | orchestrator | Restart ceph manager service ------------------------------------------- 25.27s 2026-04-13 01:03:48.553081 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.31s 2026-04-13 01:03:48.553092 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.95s 2026-04-13 01:03:48.553111 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.30s 2026-04-13 01:03:48.553140 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.28s 2026-04-13 01:03:48.553151 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.27s 2026-04-13 01:03:48.553162 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.18s 2026-04-13 01:03:48.553172 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.17s 2026-04-13 01:03:48.553183 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2026-04-13 01:03:48.553194 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s 2026-04-13 01:03:48.553204 | orchestrator | 2026-04-13 01:03:48.553215 | orchestrator | 2026-04-13 01:03:48.553226 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-13 01:03:48.553236 | orchestrator | 2026-04-13 01:03:48.553247 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-13 01:03:48.553258 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.294) 0:00:00.294 ********** 2026-04-13 01:03:48.553269 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:03:48.553279 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:03:48.553290 | orchestrator | ok: 
[testbed-node-2] 2026-04-13 01:03:48.553301 | orchestrator | 2026-04-13 01:03:48.553312 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-13 01:03:48.553323 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.260) 0:00:00.555 ********** 2026-04-13 01:03:48.553334 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-13 01:03:48.553345 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-13 01:03:48.553356 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-13 01:03:48.553367 | orchestrator | 2026-04-13 01:03:48.553378 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-13 01:03:48.553389 | orchestrator | 2026-04-13 01:03:48.553400 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-13 01:03:48.553411 | orchestrator | Monday 13 April 2026 01:03:28 +0000 (0:00:00.317) 0:00:00.873 ********** 2026-04-13 01:03:48.553421 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:03:48.553433 | orchestrator | 2026-04-13 01:03:48.553444 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-13 01:03:48.553454 | orchestrator | Monday 13 April 2026 01:03:29 +0000 (0:00:00.562) 0:00:01.436 ********** 2026-04-13 01:03:48.553469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.553484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.553516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.553529 | orchestrator | 2026-04-13 01:03:48.553542 | orchestrator | TASK 
[grafana : Check if extra configuration file exists] ********************** 2026-04-13 01:03:48.553563 | orchestrator | Monday 13 April 2026 01:03:30 +0000 (0:00:01.103) 0:00:02.540 ********** 2026-04-13 01:03:48.553583 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:03:48.553600 | orchestrator | 2026-04-13 01:03:48.553620 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-13 01:03:48.553639 | orchestrator | Monday 13 April 2026 01:03:31 +0000 (0:00:00.930) 0:00:03.471 ********** 2026-04-13 01:03:48.553657 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-13 01:03:48.553677 | orchestrator | 2026-04-13 01:03:48.553697 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-13 01:03:48.553717 | orchestrator | Monday 13 April 2026 01:03:32 +0000 (0:00:00.505) 0:00:03.976 ********** 2026-04-13 01:03:48.553735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.553757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.553778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.553808 | orchestrator | 2026-04-13 01:03:48.553820 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-13 01:03:48.553830 | orchestrator | Monday 13 April 2026 01:03:33 +0000 (0:00:01.586) 0:00:05.563 ********** 2026-04-13 01:03:48.553855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 01:03:48.553868 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:48.553879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 01:03:48.553890 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:48.553902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 01:03:48.553913 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:48.553924 | orchestrator | 2026-04-13 01:03:48.553935 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-13 01:03:48.553946 | orchestrator | Monday 13 April 2026 01:03:34 +0000 (0:00:00.454) 0:00:06.018 ********** 2026-04-13 01:03:48.553957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 01:03:48.553974 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:48.553986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 01:03:48.553997 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:48.554075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-13 01:03:48.554092 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:48.554103 | orchestrator | 2026-04-13 01:03:48.554158 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-13 01:03:48.554181 | orchestrator | Monday 13 April 2026 01:03:34 +0000 (0:00:00.701) 0:00:06.719 ********** 2026-04-13 01:03:48.554201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554257 | orchestrator | 2026-04-13 01:03:48.554268 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-13 01:03:48.554279 | orchestrator | Monday 13 April 2026 01:03:36 +0000 (0:00:01.333) 0:00:08.052 ********** 2026-04-13 01:03:48.554290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554330 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554341 | orchestrator | 2026-04-13 01:03:48.554352 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-13 01:03:48.554363 | orchestrator | Monday 13 April 2026 01:03:37 +0000 (0:00:01.641) 0:00:09.694 ********** 2026-04-13 01:03:48.554374 | orchestrator | skipping: [testbed-node-0] 2026-04-13 01:03:48.554385 | orchestrator | skipping: [testbed-node-1] 2026-04-13 01:03:48.554395 | orchestrator | skipping: [testbed-node-2] 2026-04-13 01:03:48.554407 | orchestrator | 2026-04-13 01:03:48.554426 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-13 01:03:48.554442 | orchestrator | Monday 13 April 2026 01:03:38 +0000 (0:00:00.278) 0:00:09.972 ********** 2026-04-13 01:03:48.554470 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-13 01:03:48.554492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-13 01:03:48.554521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-13 01:03:48.554538 | orchestrator | 
2026-04-13 01:03:48.554556 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-13 01:03:48.554572 | orchestrator | Monday 13 April 2026 01:03:39 +0000 (0:00:01.193) 0:00:11.166 ********** 2026-04-13 01:03:48.554591 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-13 01:03:48.554609 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-13 01:03:48.554628 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-13 01:03:48.554646 | orchestrator | 2026-04-13 01:03:48.554665 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-13 01:03:48.554683 | orchestrator | Monday 13 April 2026 01:03:40 +0000 (0:00:01.205) 0:00:12.371 ********** 2026-04-13 01:03:48.554702 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-13 01:03:48.554721 | orchestrator | 2026-04-13 01:03:48.554738 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-13 01:03:48.554755 | orchestrator | Monday 13 April 2026 01:03:41 +0000 (0:00:00.726) 0:00:13.098 ********** 2026-04-13 01:03:48.554766 | orchestrator | ok: [testbed-node-0] 2026-04-13 01:03:48.554777 | orchestrator | ok: [testbed-node-1] 2026-04-13 01:03:48.554788 | orchestrator | ok: [testbed-node-2] 2026-04-13 01:03:48.554799 | orchestrator | 2026-04-13 01:03:48.554810 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-13 01:03:48.554820 | orchestrator | Monday 13 April 2026 01:03:42 +0000 (0:00:00.813) 0:00:13.911 ********** 2026-04-13 01:03:48.554831 | orchestrator | changed: [testbed-node-0] 2026-04-13 01:03:48.554842 | orchestrator | changed: [testbed-node-1] 
2026-04-13 01:03:48.554852 | orchestrator | changed: [testbed-node-2] 2026-04-13 01:03:48.554863 | orchestrator | 2026-04-13 01:03:48.554874 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-13 01:03:48.554884 | orchestrator | Monday 13 April 2026 01:03:43 +0000 (0:00:01.207) 0:00:15.119 ********** 2026-04-13 01:03:48.554911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 01:03:48.554936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-13 
01:03:48.554948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:48.554968 | orchestrator |
2026-04-13 01:03:48.554979 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-13 01:03:48.554990 | orchestrator | Monday 13 April 2026  01:03:44 +0000 (0:00:00.968)       0:00:16.087 **********
2026-04-13 01:03:48.555001 | orchestrator | changed: [testbed-node-0] => {
2026-04-13 01:03:48.555012 | orchestrator |     "msg": "Notifying handlers"
2026-04-13 01:03:48.555023 | orchestrator | }
2026-04-13 01:03:48.555034 | orchestrator | changed: [testbed-node-1] => {
2026-04-13 01:03:48.555050 | orchestrator |     "msg": "Notifying handlers"
2026-04-13 01:03:48.555068 | orchestrator | }
2026-04-13 01:03:48.555088 | orchestrator | changed: [testbed-node-2] => {
2026-04-13 01:03:48.555104 | orchestrator |     "msg": "Notifying handlers"
2026-04-13 01:03:48.555190 | orchestrator | }
2026-04-13 01:03:48.555211 | orchestrator |
2026-04-13 01:03:48.555230 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-13 01:03:48.555248 | orchestrator | Monday 13 April 2026  01:03:44 +0000 (0:00:00.340)       0:00:16.428 **********
2026-04-13 01:03:48.555266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:48.555284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:48.555303 | orchestrator | skipping: [testbed-node-0]
2026-04-13 01:03:48.555318 | orchestrator | skipping: [testbed-node-1]
2026-04-13 01:03:48.555360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-13 01:03:48.555395 | orchestrator | skipping: [testbed-node-2]
2026-04-13 01:03:48.555414 | orchestrator |
2026-04-13 01:03:48.555432 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-13 01:03:48.555451 | orchestrator | Monday 13 April 2026  01:03:45 +0000 (0:00:00.825)       0:00:17.254 **********
2026-04-13 01:03:48.555469 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-13 01:03:48.555488 | orchestrator |
2026-04-13 01:03:48.555506 | orchestrator | PLAY RECAP *********************************************************************
2026-04-13 01:03:48.555527 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0  failed=1  skipped=4  rescued=0  ignored=0
2026-04-13 01:03:48.555546 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
2026-04-13 01:03:48.555562 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
2026-04-13 01:03:48.555579 | orchestrator |
2026-04-13 01:03:48.555597 | orchestrator |
2026-04-13 01:03:48.555613 | orchestrator | TASKS RECAP ********************************************************************
2026-04-13 01:03:48.555630 | orchestrator | Monday 13 April 2026  01:03:46 +0000 (0:00:00.738)       0:00:17.993 **********
2026-04-13 01:03:48.555647 | orchestrator | ===============================================================================
2026-04-13 01:03:48.555664 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.64s
2026-04-13 01:03:48.555680 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.59s
2026-04-13 01:03:48.555697 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s
2026-04-13 01:03:48.555708 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.21s
2026-04-13 01:03:48.555718 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s
2026-04-13 01:03:48.555727 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.19s
2026-04-13 01:03:48.555737 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.10s
2026-04-13 01:03:48.555746 | orchestrator | service-check-containers : grafana | Check containers ------------------- 0.97s
2026-04-13 01:03:48.555756 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.93s
2026-04-13 01:03:48.555765 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.83s
2026-04-13 01:03:48.555775 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.81s
2026-04-13 01:03:48.555784 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.74s
2026-04-13 01:03:48.555794 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.73s
2026-04-13 01:03:48.555803 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.70s
2026-04-13 01:03:48.555813 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.56s
2026-04-13 01:03:48.555822 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.51s
2026-04-13 01:03:48.555832 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.45s
2026-04-13 01:03:48.555841 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.34s
2026-04-13 01:03:48.555851 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s
2026-04-13 01:03:48.555861 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.28s
2026-04-13 01:03:48.555870 | orchestrator | 2026-04-13 01:03:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:03:51.589922 | orchestrator | 2026-04-13 01:03:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:03:51.592933 | orchestrator | 2026-04-13 01:03:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:03:51.592973 | orchestrator | 2026-04-13 01:03:51 | INFO  | Wait 1 second(s) until the next check
[... the same two state checks repeated every ~3 seconds from 01:03:54 to 01:08:10; tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remained in state STARTED throughout ...]
2026-04-13 01:08:10.972723 | orchestrator | 2026-04-13 01:08:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:08:10.973930 | orchestrator | 2026-04-13 01:08:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:08:10.974006 | orchestrator | 2026-04-13 01:08:10 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:08:14.048738 | orchestrator | 2026-04-13 01:08:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:08:14.049546 | orchestrator | 2026-04-13 01:08:14 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:14.049596 | orchestrator | 2026-04-13 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:17.100788 | orchestrator | 2026-04-13 01:08:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:17.102241 | orchestrator | 2026-04-13 01:08:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:17.102788 | orchestrator | 2026-04-13 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:20.159072 | orchestrator | 2026-04-13 01:08:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:20.160716 | orchestrator | 2026-04-13 01:08:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:20.160757 | orchestrator | 2026-04-13 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:23.211043 | orchestrator | 2026-04-13 01:08:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:23.212637 | orchestrator | 2026-04-13 01:08:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:23.212683 | orchestrator | 2026-04-13 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:26.254920 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:26.257171 | orchestrator | 2026-04-13 01:08:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:26.257225 | orchestrator | 2026-04-13 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:29.306484 | orchestrator | 2026-04-13 01:08:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:29.306858 | orchestrator | 2026-04-13 01:08:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:08:29.306879 | orchestrator | 2026-04-13 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:32.361130 | orchestrator | 2026-04-13 01:08:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:32.363653 | orchestrator | 2026-04-13 01:08:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:32.363701 | orchestrator | 2026-04-13 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:35.420597 | orchestrator | 2026-04-13 01:08:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:35.424354 | orchestrator | 2026-04-13 01:08:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:35.424404 | orchestrator | 2026-04-13 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:38.469737 | orchestrator | 2026-04-13 01:08:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:38.471096 | orchestrator | 2026-04-13 01:08:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:38.471285 | orchestrator | 2026-04-13 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:41.511870 | orchestrator | 2026-04-13 01:08:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:41.513288 | orchestrator | 2026-04-13 01:08:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:41.513344 | orchestrator | 2026-04-13 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:44.562531 | orchestrator | 2026-04-13 01:08:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:44.563826 | orchestrator | 2026-04-13 01:08:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:44.563885 | orchestrator | 2026-04-13 01:08:44 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:08:47.615514 | orchestrator | 2026-04-13 01:08:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:47.617813 | orchestrator | 2026-04-13 01:08:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:47.617859 | orchestrator | 2026-04-13 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:50.667855 | orchestrator | 2026-04-13 01:08:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:50.670353 | orchestrator | 2026-04-13 01:08:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:50.670412 | orchestrator | 2026-04-13 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:53.722594 | orchestrator | 2026-04-13 01:08:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:53.723476 | orchestrator | 2026-04-13 01:08:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:53.723546 | orchestrator | 2026-04-13 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:56.777617 | orchestrator | 2026-04-13 01:08:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:56.778161 | orchestrator | 2026-04-13 01:08:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:56.778200 | orchestrator | 2026-04-13 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:08:59.828856 | orchestrator | 2026-04-13 01:08:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:08:59.829799 | orchestrator | 2026-04-13 01:08:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:08:59.830139 | orchestrator | 2026-04-13 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:02.872233 | orchestrator | 2026-04-13 
01:09:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:02.873584 | orchestrator | 2026-04-13 01:09:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:02.873638 | orchestrator | 2026-04-13 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:05.932619 | orchestrator | 2026-04-13 01:09:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:05.937667 | orchestrator | 2026-04-13 01:09:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:05.937731 | orchestrator | 2026-04-13 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:08.984412 | orchestrator | 2026-04-13 01:09:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:08.986150 | orchestrator | 2026-04-13 01:09:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:08.986180 | orchestrator | 2026-04-13 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:12.036724 | orchestrator | 2026-04-13 01:09:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:12.038315 | orchestrator | 2026-04-13 01:09:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:12.038376 | orchestrator | 2026-04-13 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:15.082798 | orchestrator | 2026-04-13 01:09:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:15.084140 | orchestrator | 2026-04-13 01:09:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:15.084197 | orchestrator | 2026-04-13 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:18.128534 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:09:18.129558 | orchestrator | 2026-04-13 01:09:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:18.129587 | orchestrator | 2026-04-13 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:21.184237 | orchestrator | 2026-04-13 01:09:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:21.186886 | orchestrator | 2026-04-13 01:09:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:21.186975 | orchestrator | 2026-04-13 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:24.241329 | orchestrator | 2026-04-13 01:09:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:24.244332 | orchestrator | 2026-04-13 01:09:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:24.244402 | orchestrator | 2026-04-13 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:27.292972 | orchestrator | 2026-04-13 01:09:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:27.295293 | orchestrator | 2026-04-13 01:09:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:27.295554 | orchestrator | 2026-04-13 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:30.348961 | orchestrator | 2026-04-13 01:09:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:30.350195 | orchestrator | 2026-04-13 01:09:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:30.350368 | orchestrator | 2026-04-13 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:33.403748 | orchestrator | 2026-04-13 01:09:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:33.408082 | orchestrator | 2026-04-13 01:09:33 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:33.408158 | orchestrator | 2026-04-13 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:36.458503 | orchestrator | 2026-04-13 01:09:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:36.461552 | orchestrator | 2026-04-13 01:09:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:36.461638 | orchestrator | 2026-04-13 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:39.518817 | orchestrator | 2026-04-13 01:09:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:39.521289 | orchestrator | 2026-04-13 01:09:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:39.521362 | orchestrator | 2026-04-13 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:42.568398 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:42.569369 | orchestrator | 2026-04-13 01:09:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:42.569448 | orchestrator | 2026-04-13 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:45.615233 | orchestrator | 2026-04-13 01:09:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:45.616405 | orchestrator | 2026-04-13 01:09:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:45.616503 | orchestrator | 2026-04-13 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:48.671683 | orchestrator | 2026-04-13 01:09:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:48.673659 | orchestrator | 2026-04-13 01:09:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:09:48.673713 | orchestrator | 2026-04-13 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:51.719308 | orchestrator | 2026-04-13 01:09:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:51.720927 | orchestrator | 2026-04-13 01:09:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:51.721007 | orchestrator | 2026-04-13 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:54.772002 | orchestrator | 2026-04-13 01:09:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:54.773991 | orchestrator | 2026-04-13 01:09:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:54.774130 | orchestrator | 2026-04-13 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:09:57.823986 | orchestrator | 2026-04-13 01:09:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:09:57.825014 | orchestrator | 2026-04-13 01:09:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:09:57.825135 | orchestrator | 2026-04-13 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:00.865657 | orchestrator | 2026-04-13 01:10:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:00.868151 | orchestrator | 2026-04-13 01:10:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:00.868250 | orchestrator | 2026-04-13 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:03.913909 | orchestrator | 2026-04-13 01:10:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:03.915522 | orchestrator | 2026-04-13 01:10:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:03.915576 | orchestrator | 2026-04-13 01:10:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:10:06.959366 | orchestrator | 2026-04-13 01:10:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:06.961026 | orchestrator | 2026-04-13 01:10:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:06.961100 | orchestrator | 2026-04-13 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:10.003618 | orchestrator | 2026-04-13 01:10:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:10.005364 | orchestrator | 2026-04-13 01:10:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:10.005408 | orchestrator | 2026-04-13 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:13.058606 | orchestrator | 2026-04-13 01:10:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:13.060838 | orchestrator | 2026-04-13 01:10:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:13.061002 | orchestrator | 2026-04-13 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:16.109298 | orchestrator | 2026-04-13 01:10:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:16.114232 | orchestrator | 2026-04-13 01:10:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:16.114292 | orchestrator | 2026-04-13 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:19.162578 | orchestrator | 2026-04-13 01:10:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:19.164079 | orchestrator | 2026-04-13 01:10:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:19.164132 | orchestrator | 2026-04-13 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:22.210834 | orchestrator | 2026-04-13 
01:10:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:22.213083 | orchestrator | 2026-04-13 01:10:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:22.213138 | orchestrator | 2026-04-13 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:25.270607 | orchestrator | 2026-04-13 01:10:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:25.274254 | orchestrator | 2026-04-13 01:10:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:25.274360 | orchestrator | 2026-04-13 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:28.325137 | orchestrator | 2026-04-13 01:10:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:28.326505 | orchestrator | 2026-04-13 01:10:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:28.326677 | orchestrator | 2026-04-13 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:31.377346 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:31.379709 | orchestrator | 2026-04-13 01:10:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:31.379785 | orchestrator | 2026-04-13 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:34.429352 | orchestrator | 2026-04-13 01:10:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:34.430812 | orchestrator | 2026-04-13 01:10:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:34.430915 | orchestrator | 2026-04-13 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:37.484805 | orchestrator | 2026-04-13 01:10:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:10:37.485624 | orchestrator | 2026-04-13 01:10:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:37.485647 | orchestrator | 2026-04-13 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:40.531359 | orchestrator | 2026-04-13 01:10:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:40.532288 | orchestrator | 2026-04-13 01:10:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:40.532319 | orchestrator | 2026-04-13 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:43.590278 | orchestrator | 2026-04-13 01:10:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:43.595015 | orchestrator | 2026-04-13 01:10:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:43.595131 | orchestrator | 2026-04-13 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:46.647048 | orchestrator | 2026-04-13 01:10:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:46.648766 | orchestrator | 2026-04-13 01:10:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:46.648967 | orchestrator | 2026-04-13 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:49.699446 | orchestrator | 2026-04-13 01:10:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:49.701155 | orchestrator | 2026-04-13 01:10:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:49.701207 | orchestrator | 2026-04-13 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:52.747099 | orchestrator | 2026-04-13 01:10:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:52.749174 | orchestrator | 2026-04-13 01:10:52 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:52.749218 | orchestrator | 2026-04-13 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:55.803976 | orchestrator | 2026-04-13 01:10:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:55.804704 | orchestrator | 2026-04-13 01:10:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:55.804751 | orchestrator | 2026-04-13 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:10:58.857928 | orchestrator | 2026-04-13 01:10:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:10:58.859981 | orchestrator | 2026-04-13 01:10:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:10:58.860037 | orchestrator | 2026-04-13 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:01.923748 | orchestrator | 2026-04-13 01:11:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:01.925567 | orchestrator | 2026-04-13 01:11:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:01.925711 | orchestrator | 2026-04-13 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:05.004321 | orchestrator | 2026-04-13 01:11:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:05.004386 | orchestrator | 2026-04-13 01:11:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:05.004392 | orchestrator | 2026-04-13 01:11:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:08.060137 | orchestrator | 2026-04-13 01:11:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:08.060235 | orchestrator | 2026-04-13 01:11:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:11:08.060250 | orchestrator | 2026-04-13 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:11.126620 | orchestrator | 2026-04-13 01:11:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:11.129446 | orchestrator | 2026-04-13 01:11:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:11.129598 | orchestrator | 2026-04-13 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:14.206612 | orchestrator | 2026-04-13 01:11:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:14.207538 | orchestrator | 2026-04-13 01:11:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:14.207580 | orchestrator | 2026-04-13 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:17.256268 | orchestrator | 2026-04-13 01:11:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:17.258849 | orchestrator | 2026-04-13 01:11:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:17.258917 | orchestrator | 2026-04-13 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:20.314255 | orchestrator | 2026-04-13 01:11:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:20.316098 | orchestrator | 2026-04-13 01:11:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:20.316163 | orchestrator | 2026-04-13 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:23.376625 | orchestrator | 2026-04-13 01:11:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:23.376722 | orchestrator | 2026-04-13 01:11:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:23.377438 | orchestrator | 2026-04-13 01:11:23 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:11:26.438881 | orchestrator | 2026-04-13 01:11:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:26.440661 | orchestrator | 2026-04-13 01:11:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:26.440711 | orchestrator | 2026-04-13 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:29.497327 | orchestrator | 2026-04-13 01:11:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:29.499307 | orchestrator | 2026-04-13 01:11:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:29.500052 | orchestrator | 2026-04-13 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:32.557027 | orchestrator | 2026-04-13 01:11:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:32.561211 | orchestrator | 2026-04-13 01:11:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:32.561277 | orchestrator | 2026-04-13 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:35.610173 | orchestrator | 2026-04-13 01:11:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:35.613714 | orchestrator | 2026-04-13 01:11:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:35.613830 | orchestrator | 2026-04-13 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:38.661378 | orchestrator | 2026-04-13 01:11:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:38.666192 | orchestrator | 2026-04-13 01:11:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:38.666276 | orchestrator | 2026-04-13 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:41.722588 | orchestrator | 2026-04-13 
01:11:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:41.725707 | orchestrator | 2026-04-13 01:11:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:41.725852 | orchestrator | 2026-04-13 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:44.788170 | orchestrator | 2026-04-13 01:11:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:44.789915 | orchestrator | 2026-04-13 01:11:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:44.789970 | orchestrator | 2026-04-13 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:47.846946 | orchestrator | 2026-04-13 01:11:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:47.848835 | orchestrator | 2026-04-13 01:11:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:47.848890 | orchestrator | 2026-04-13 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:50.897677 | orchestrator | 2026-04-13 01:11:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:50.898060 | orchestrator | 2026-04-13 01:11:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:50.898128 | orchestrator | 2026-04-13 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:53.946265 | orchestrator | 2026-04-13 01:11:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:11:53.948624 | orchestrator | 2026-04-13 01:11:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:11:53.948670 | orchestrator | 2026-04-13 01:11:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:11:57.004403 | orchestrator | 2026-04-13 01:11:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED
2026-04-13 01:11:57.007132 | orchestrator | 2026-04-13 01:11:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:11:57.007200 | orchestrator | 2026-04-13 01:11:57 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:12:00 through 01:17:11: tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 both remain in state STARTED ...]
2026-04-13 01:17:14.605012 | orchestrator | 2026-04-13 01:17:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state
STARTED 2026-04-13 01:17:14.606815 | orchestrator | 2026-04-13 01:17:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:14.606963 | orchestrator | 2026-04-13 01:17:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:17.658445 | orchestrator | 2026-04-13 01:17:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:17.660555 | orchestrator | 2026-04-13 01:17:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:17.660595 | orchestrator | 2026-04-13 01:17:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:20.713186 | orchestrator | 2026-04-13 01:17:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:20.715846 | orchestrator | 2026-04-13 01:17:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:20.715903 | orchestrator | 2026-04-13 01:17:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:23.775985 | orchestrator | 2026-04-13 01:17:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:23.777934 | orchestrator | 2026-04-13 01:17:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:23.777972 | orchestrator | 2026-04-13 01:17:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:26.828246 | orchestrator | 2026-04-13 01:17:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:26.831164 | orchestrator | 2026-04-13 01:17:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:26.831899 | orchestrator | 2026-04-13 01:17:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:29.884895 | orchestrator | 2026-04-13 01:17:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:29.886763 | orchestrator | 2026-04-13 01:17:29 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:29.886802 | orchestrator | 2026-04-13 01:17:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:32.939132 | orchestrator | 2026-04-13 01:17:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:32.940313 | orchestrator | 2026-04-13 01:17:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:32.940389 | orchestrator | 2026-04-13 01:17:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:35.993133 | orchestrator | 2026-04-13 01:17:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:35.994275 | orchestrator | 2026-04-13 01:17:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:35.994345 | orchestrator | 2026-04-13 01:17:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:39.066201 | orchestrator | 2026-04-13 01:17:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:39.071927 | orchestrator | 2026-04-13 01:17:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:39.071991 | orchestrator | 2026-04-13 01:17:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:42.122665 | orchestrator | 2026-04-13 01:17:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:42.124235 | orchestrator | 2026-04-13 01:17:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:42.124288 | orchestrator | 2026-04-13 01:17:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:45.175930 | orchestrator | 2026-04-13 01:17:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:45.176906 | orchestrator | 2026-04-13 01:17:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:17:45.177357 | orchestrator | 2026-04-13 01:17:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:48.218234 | orchestrator | 2026-04-13 01:17:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:48.220126 | orchestrator | 2026-04-13 01:17:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:48.220166 | orchestrator | 2026-04-13 01:17:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:51.278295 | orchestrator | 2026-04-13 01:17:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:51.280387 | orchestrator | 2026-04-13 01:17:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:51.280437 | orchestrator | 2026-04-13 01:17:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:54.331324 | orchestrator | 2026-04-13 01:17:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:54.333336 | orchestrator | 2026-04-13 01:17:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:54.333397 | orchestrator | 2026-04-13 01:17:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:17:57.383106 | orchestrator | 2026-04-13 01:17:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:17:57.384802 | orchestrator | 2026-04-13 01:17:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:17:57.384863 | orchestrator | 2026-04-13 01:17:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:00.436989 | orchestrator | 2026-04-13 01:18:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:00.437427 | orchestrator | 2026-04-13 01:18:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:00.437460 | orchestrator | 2026-04-13 01:18:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:18:03.497621 | orchestrator | 2026-04-13 01:18:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:03.499648 | orchestrator | 2026-04-13 01:18:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:03.499706 | orchestrator | 2026-04-13 01:18:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:06.555004 | orchestrator | 2026-04-13 01:18:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:06.556994 | orchestrator | 2026-04-13 01:18:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:06.557075 | orchestrator | 2026-04-13 01:18:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:09.608169 | orchestrator | 2026-04-13 01:18:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:09.610865 | orchestrator | 2026-04-13 01:18:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:09.611278 | orchestrator | 2026-04-13 01:18:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:12.660669 | orchestrator | 2026-04-13 01:18:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:12.662765 | orchestrator | 2026-04-13 01:18:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:12.662825 | orchestrator | 2026-04-13 01:18:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:15.716515 | orchestrator | 2026-04-13 01:18:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:15.718779 | orchestrator | 2026-04-13 01:18:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:15.718889 | orchestrator | 2026-04-13 01:18:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:18.762261 | orchestrator | 2026-04-13 
01:18:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:18.763348 | orchestrator | 2026-04-13 01:18:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:18.763368 | orchestrator | 2026-04-13 01:18:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:21.814797 | orchestrator | 2026-04-13 01:18:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:21.816642 | orchestrator | 2026-04-13 01:18:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:21.816693 | orchestrator | 2026-04-13 01:18:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:24.866978 | orchestrator | 2026-04-13 01:18:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:24.867511 | orchestrator | 2026-04-13 01:18:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:24.867547 | orchestrator | 2026-04-13 01:18:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:27.917152 | orchestrator | 2026-04-13 01:18:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:27.918821 | orchestrator | 2026-04-13 01:18:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:27.918863 | orchestrator | 2026-04-13 01:18:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:30.977819 | orchestrator | 2026-04-13 01:18:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:30.979502 | orchestrator | 2026-04-13 01:18:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:30.979538 | orchestrator | 2026-04-13 01:18:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:34.029135 | orchestrator | 2026-04-13 01:18:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:18:34.031627 | orchestrator | 2026-04-13 01:18:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:34.031738 | orchestrator | 2026-04-13 01:18:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:37.085103 | orchestrator | 2026-04-13 01:18:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:37.086532 | orchestrator | 2026-04-13 01:18:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:37.086581 | orchestrator | 2026-04-13 01:18:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:40.149808 | orchestrator | 2026-04-13 01:18:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:40.153816 | orchestrator | 2026-04-13 01:18:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:40.153898 | orchestrator | 2026-04-13 01:18:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:43.201604 | orchestrator | 2026-04-13 01:18:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:43.203814 | orchestrator | 2026-04-13 01:18:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:43.203832 | orchestrator | 2026-04-13 01:18:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:46.258641 | orchestrator | 2026-04-13 01:18:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:46.260300 | orchestrator | 2026-04-13 01:18:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:46.260363 | orchestrator | 2026-04-13 01:18:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:49.312077 | orchestrator | 2026-04-13 01:18:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:49.314556 | orchestrator | 2026-04-13 01:18:49 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:49.314654 | orchestrator | 2026-04-13 01:18:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:52.370858 | orchestrator | 2026-04-13 01:18:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:52.373593 | orchestrator | 2026-04-13 01:18:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:52.373694 | orchestrator | 2026-04-13 01:18:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:55.423149 | orchestrator | 2026-04-13 01:18:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:55.425271 | orchestrator | 2026-04-13 01:18:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:55.425467 | orchestrator | 2026-04-13 01:18:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:18:58.470191 | orchestrator | 2026-04-13 01:18:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:18:58.472107 | orchestrator | 2026-04-13 01:18:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:18:58.472150 | orchestrator | 2026-04-13 01:18:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:01.521779 | orchestrator | 2026-04-13 01:19:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:01.523332 | orchestrator | 2026-04-13 01:19:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:01.523402 | orchestrator | 2026-04-13 01:19:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:04.579299 | orchestrator | 2026-04-13 01:19:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:04.580987 | orchestrator | 2026-04-13 01:19:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:19:04.581130 | orchestrator | 2026-04-13 01:19:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:07.629706 | orchestrator | 2026-04-13 01:19:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:07.630917 | orchestrator | 2026-04-13 01:19:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:07.630934 | orchestrator | 2026-04-13 01:19:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:10.686147 | orchestrator | 2026-04-13 01:19:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:10.688085 | orchestrator | 2026-04-13 01:19:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:10.688118 | orchestrator | 2026-04-13 01:19:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:13.742254 | orchestrator | 2026-04-13 01:19:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:13.744816 | orchestrator | 2026-04-13 01:19:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:13.744876 | orchestrator | 2026-04-13 01:19:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:16.803213 | orchestrator | 2026-04-13 01:19:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:16.805781 | orchestrator | 2026-04-13 01:19:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:16.805862 | orchestrator | 2026-04-13 01:19:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:19.847710 | orchestrator | 2026-04-13 01:19:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:19.849036 | orchestrator | 2026-04-13 01:19:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:19.849119 | orchestrator | 2026-04-13 01:19:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:19:22.901522 | orchestrator | 2026-04-13 01:19:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:22.902636 | orchestrator | 2026-04-13 01:19:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:22.902722 | orchestrator | 2026-04-13 01:19:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:25.962604 | orchestrator | 2026-04-13 01:19:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:25.963737 | orchestrator | 2026-04-13 01:19:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:25.963804 | orchestrator | 2026-04-13 01:19:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:29.029654 | orchestrator | 2026-04-13 01:19:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:29.031413 | orchestrator | 2026-04-13 01:19:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:29.031517 | orchestrator | 2026-04-13 01:19:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:32.083934 | orchestrator | 2026-04-13 01:19:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:32.084038 | orchestrator | 2026-04-13 01:19:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:32.084053 | orchestrator | 2026-04-13 01:19:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:35.138733 | orchestrator | 2026-04-13 01:19:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:35.139400 | orchestrator | 2026-04-13 01:19:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:35.139485 | orchestrator | 2026-04-13 01:19:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:38.196347 | orchestrator | 2026-04-13 
01:19:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:38.197959 | orchestrator | 2026-04-13 01:19:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:38.197985 | orchestrator | 2026-04-13 01:19:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:41.251918 | orchestrator | 2026-04-13 01:19:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:41.253970 | orchestrator | 2026-04-13 01:19:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:41.253997 | orchestrator | 2026-04-13 01:19:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:44.309736 | orchestrator | 2026-04-13 01:19:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:44.310705 | orchestrator | 2026-04-13 01:19:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:44.310855 | orchestrator | 2026-04-13 01:19:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:19:47.366130 | orchestrator | 2026-04-13 01:19:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:19:47.367045 | orchestrator | 2026-04-13 01:19:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:19:47.367081 | orchestrator | 2026-04-13 01:19:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:21:50.525016 | orchestrator | 2026-04-13 01:21:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:21:50.525103 | orchestrator | 2026-04-13 01:21:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:21:50.525113 | orchestrator | 2026-04-13 01:21:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:21:53.571134 | orchestrator | 2026-04-13 01:21:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:21:53.571770 | orchestrator | 2026-04-13 01:21:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:21:53.571805 | orchestrator | 2026-04-13 01:21:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:21:56.619199 | orchestrator | 2026-04-13 01:21:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:21:56.620539 | orchestrator | 2026-04-13 01:21:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:21:56.620575 | orchestrator | 2026-04-13 01:21:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:21:59.668604 | orchestrator | 2026-04-13 01:21:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:21:59.670296 | orchestrator | 2026-04-13 01:21:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:21:59.670363 | orchestrator | 2026-04-13 01:21:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:02.719787 | orchestrator | 2026-04-13 01:22:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:02.722236 | orchestrator | 2026-04-13 01:22:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:02.722307 | orchestrator | 2026-04-13 01:22:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:05.780199 | orchestrator | 2026-04-13 01:22:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:05.784248 | orchestrator | 2026-04-13 01:22:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:05.784358 | orchestrator | 2026-04-13 01:22:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:08.834605 | orchestrator | 2026-04-13 01:22:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:08.836177 | orchestrator | 2026-04-13 01:22:08 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:08.836386 | orchestrator | 2026-04-13 01:22:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:11.886812 | orchestrator | 2026-04-13 01:22:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:11.888857 | orchestrator | 2026-04-13 01:22:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:11.888907 | orchestrator | 2026-04-13 01:22:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:14.941953 | orchestrator | 2026-04-13 01:22:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:14.946086 | orchestrator | 2026-04-13 01:22:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:14.946142 | orchestrator | 2026-04-13 01:22:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:17.989163 | orchestrator | 2026-04-13 01:22:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:17.990933 | orchestrator | 2026-04-13 01:22:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:17.990973 | orchestrator | 2026-04-13 01:22:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:21.046387 | orchestrator | 2026-04-13 01:22:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:21.047210 | orchestrator | 2026-04-13 01:22:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:21.047236 | orchestrator | 2026-04-13 01:22:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:24.091263 | orchestrator | 2026-04-13 01:22:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:24.093131 | orchestrator | 2026-04-13 01:22:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:22:24.093161 | orchestrator | 2026-04-13 01:22:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:27.147387 | orchestrator | 2026-04-13 01:22:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:27.149936 | orchestrator | 2026-04-13 01:22:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:27.150252 | orchestrator | 2026-04-13 01:22:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:30.206315 | orchestrator | 2026-04-13 01:22:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:30.207683 | orchestrator | 2026-04-13 01:22:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:30.207783 | orchestrator | 2026-04-13 01:22:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:33.253895 | orchestrator | 2026-04-13 01:22:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:33.255950 | orchestrator | 2026-04-13 01:22:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:33.256103 | orchestrator | 2026-04-13 01:22:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:36.309385 | orchestrator | 2026-04-13 01:22:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:36.311075 | orchestrator | 2026-04-13 01:22:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:36.311155 | orchestrator | 2026-04-13 01:22:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:22:39.357210 | orchestrator | 2026-04-13 01:22:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:22:39.358575 | orchestrator | 2026-04-13 01:22:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:22:39.358629 | orchestrator | 2026-04-13 01:22:39 | INFO  | Wait 1 second(s) 
until the next check
2026-04-13 01:22:42.408493 | orchestrator | 2026-04-13 01:22:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:22:42.410641 | orchestrator | 2026-04-13 01:22:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:22:42.410723 | orchestrator | 2026-04-13 01:22:42 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 seconds from 01:22:45 through 01:27:53; both tasks remained in state STARTED throughout ...]
2026-04-13 01:27:56.764165 | orchestrator | 2026-04-13 01:27:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:27:56.764774 | orchestrator | 2026-04-13 01:27:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:27:56.764802 | orchestrator | 2026-04-13 01:27:56 | INFO  | Wait 1 second(s)
until the next check 2026-04-13 01:27:59.811603 | orchestrator | 2026-04-13 01:27:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:27:59.813547 | orchestrator | 2026-04-13 01:27:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:27:59.813648 | orchestrator | 2026-04-13 01:27:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:02.864963 | orchestrator | 2026-04-13 01:28:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:02.866725 | orchestrator | 2026-04-13 01:28:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:02.866796 | orchestrator | 2026-04-13 01:28:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:05.913962 | orchestrator | 2026-04-13 01:28:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:05.915878 | orchestrator | 2026-04-13 01:28:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:05.916019 | orchestrator | 2026-04-13 01:28:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:08.963837 | orchestrator | 2026-04-13 01:28:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:08.965967 | orchestrator | 2026-04-13 01:28:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:08.966125 | orchestrator | 2026-04-13 01:28:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:12.027301 | orchestrator | 2026-04-13 01:28:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:12.029731 | orchestrator | 2026-04-13 01:28:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:12.029813 | orchestrator | 2026-04-13 01:28:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:15.080571 | orchestrator | 2026-04-13 
01:28:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:15.081247 | orchestrator | 2026-04-13 01:28:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:15.081529 | orchestrator | 2026-04-13 01:28:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:18.133853 | orchestrator | 2026-04-13 01:28:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:18.134883 | orchestrator | 2026-04-13 01:28:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:18.135032 | orchestrator | 2026-04-13 01:28:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:21.182115 | orchestrator | 2026-04-13 01:28:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:21.183715 | orchestrator | 2026-04-13 01:28:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:21.183753 | orchestrator | 2026-04-13 01:28:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:24.233622 | orchestrator | 2026-04-13 01:28:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:24.236189 | orchestrator | 2026-04-13 01:28:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:24.238342 | orchestrator | 2026-04-13 01:28:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:27.285911 | orchestrator | 2026-04-13 01:28:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:27.286995 | orchestrator | 2026-04-13 01:28:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:27.287040 | orchestrator | 2026-04-13 01:28:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:30.331464 | orchestrator | 2026-04-13 01:28:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:28:30.332703 | orchestrator | 2026-04-13 01:28:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:30.332737 | orchestrator | 2026-04-13 01:28:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:33.383285 | orchestrator | 2026-04-13 01:28:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:33.384951 | orchestrator | 2026-04-13 01:28:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:33.385090 | orchestrator | 2026-04-13 01:28:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:36.437253 | orchestrator | 2026-04-13 01:28:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:36.441362 | orchestrator | 2026-04-13 01:28:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:36.441621 | orchestrator | 2026-04-13 01:28:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:39.498273 | orchestrator | 2026-04-13 01:28:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:39.501562 | orchestrator | 2026-04-13 01:28:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:39.501638 | orchestrator | 2026-04-13 01:28:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:42.560562 | orchestrator | 2026-04-13 01:28:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:42.562962 | orchestrator | 2026-04-13 01:28:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:42.563003 | orchestrator | 2026-04-13 01:28:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:45.615818 | orchestrator | 2026-04-13 01:28:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:45.619627 | orchestrator | 2026-04-13 01:28:45 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:45.619696 | orchestrator | 2026-04-13 01:28:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:48.674856 | orchestrator | 2026-04-13 01:28:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:48.676221 | orchestrator | 2026-04-13 01:28:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:48.676270 | orchestrator | 2026-04-13 01:28:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:51.734188 | orchestrator | 2026-04-13 01:28:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:51.735830 | orchestrator | 2026-04-13 01:28:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:51.735914 | orchestrator | 2026-04-13 01:28:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:54.782966 | orchestrator | 2026-04-13 01:28:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:54.788217 | orchestrator | 2026-04-13 01:28:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:54.788287 | orchestrator | 2026-04-13 01:28:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:28:57.837392 | orchestrator | 2026-04-13 01:28:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:28:57.838353 | orchestrator | 2026-04-13 01:28:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:28:57.838379 | orchestrator | 2026-04-13 01:28:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:00.891739 | orchestrator | 2026-04-13 01:29:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:00.892266 | orchestrator | 2026-04-13 01:29:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:29:00.892310 | orchestrator | 2026-04-13 01:29:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:03.949011 | orchestrator | 2026-04-13 01:29:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:03.949362 | orchestrator | 2026-04-13 01:29:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:03.949593 | orchestrator | 2026-04-13 01:29:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:07.003761 | orchestrator | 2026-04-13 01:29:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:07.005587 | orchestrator | 2026-04-13 01:29:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:07.005611 | orchestrator | 2026-04-13 01:29:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:10.059994 | orchestrator | 2026-04-13 01:29:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:10.063276 | orchestrator | 2026-04-13 01:29:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:10.063331 | orchestrator | 2026-04-13 01:29:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:13.115898 | orchestrator | 2026-04-13 01:29:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:13.118555 | orchestrator | 2026-04-13 01:29:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:13.118623 | orchestrator | 2026-04-13 01:29:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:16.169651 | orchestrator | 2026-04-13 01:29:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:16.172021 | orchestrator | 2026-04-13 01:29:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:16.172275 | orchestrator | 2026-04-13 01:29:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:29:19.234212 | orchestrator | 2026-04-13 01:29:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:19.240090 | orchestrator | 2026-04-13 01:29:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:19.240223 | orchestrator | 2026-04-13 01:29:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:22.292798 | orchestrator | 2026-04-13 01:29:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:22.297112 | orchestrator | 2026-04-13 01:29:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:22.297267 | orchestrator | 2026-04-13 01:29:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:25.351268 | orchestrator | 2026-04-13 01:29:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:25.353252 | orchestrator | 2026-04-13 01:29:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:25.353318 | orchestrator | 2026-04-13 01:29:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:28.402263 | orchestrator | 2026-04-13 01:29:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:28.403193 | orchestrator | 2026-04-13 01:29:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:28.403293 | orchestrator | 2026-04-13 01:29:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:31.455746 | orchestrator | 2026-04-13 01:29:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:31.457624 | orchestrator | 2026-04-13 01:29:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:31.457781 | orchestrator | 2026-04-13 01:29:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:34.505214 | orchestrator | 2026-04-13 
01:29:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:34.507189 | orchestrator | 2026-04-13 01:29:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:34.507241 | orchestrator | 2026-04-13 01:29:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:37.556600 | orchestrator | 2026-04-13 01:29:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:37.558368 | orchestrator | 2026-04-13 01:29:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:37.558403 | orchestrator | 2026-04-13 01:29:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:40.607836 | orchestrator | 2026-04-13 01:29:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:40.609171 | orchestrator | 2026-04-13 01:29:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:40.609208 | orchestrator | 2026-04-13 01:29:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:43.675018 | orchestrator | 2026-04-13 01:29:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:43.677749 | orchestrator | 2026-04-13 01:29:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:43.677817 | orchestrator | 2026-04-13 01:29:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:46.729992 | orchestrator | 2026-04-13 01:29:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:46.731469 | orchestrator | 2026-04-13 01:29:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:46.731679 | orchestrator | 2026-04-13 01:29:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:49.775877 | orchestrator | 2026-04-13 01:29:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:29:49.777184 | orchestrator | 2026-04-13 01:29:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:49.777264 | orchestrator | 2026-04-13 01:29:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:52.825176 | orchestrator | 2026-04-13 01:29:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:52.826962 | orchestrator | 2026-04-13 01:29:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:52.827003 | orchestrator | 2026-04-13 01:29:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:55.877451 | orchestrator | 2026-04-13 01:29:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:55.878170 | orchestrator | 2026-04-13 01:29:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:55.878188 | orchestrator | 2026-04-13 01:29:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:29:58.933805 | orchestrator | 2026-04-13 01:29:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:29:58.935421 | orchestrator | 2026-04-13 01:29:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:29:58.935473 | orchestrator | 2026-04-13 01:29:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:01.985944 | orchestrator | 2026-04-13 01:30:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:01.988647 | orchestrator | 2026-04-13 01:30:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:01.988749 | orchestrator | 2026-04-13 01:30:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:05.045633 | orchestrator | 2026-04-13 01:30:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:05.046384 | orchestrator | 2026-04-13 01:30:05 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:05.046480 | orchestrator | 2026-04-13 01:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:08.098721 | orchestrator | 2026-04-13 01:30:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:08.101145 | orchestrator | 2026-04-13 01:30:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:08.101252 | orchestrator | 2026-04-13 01:30:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:11.148451 | orchestrator | 2026-04-13 01:30:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:11.149848 | orchestrator | 2026-04-13 01:30:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:11.149949 | orchestrator | 2026-04-13 01:30:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:14.208876 | orchestrator | 2026-04-13 01:30:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:14.213282 | orchestrator | 2026-04-13 01:30:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:14.213357 | orchestrator | 2026-04-13 01:30:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:17.266923 | orchestrator | 2026-04-13 01:30:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:17.268044 | orchestrator | 2026-04-13 01:30:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:17.268105 | orchestrator | 2026-04-13 01:30:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:20.323233 | orchestrator | 2026-04-13 01:30:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:20.326099 | orchestrator | 2026-04-13 01:30:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:30:20.327477 | orchestrator | 2026-04-13 01:30:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:23.377084 | orchestrator | 2026-04-13 01:30:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:23.380377 | orchestrator | 2026-04-13 01:30:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:23.380419 | orchestrator | 2026-04-13 01:30:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:26.432200 | orchestrator | 2026-04-13 01:30:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:26.433746 | orchestrator | 2026-04-13 01:30:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:26.433820 | orchestrator | 2026-04-13 01:30:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:29.481448 | orchestrator | 2026-04-13 01:30:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:29.482267 | orchestrator | 2026-04-13 01:30:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:29.482354 | orchestrator | 2026-04-13 01:30:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:32.533338 | orchestrator | 2026-04-13 01:30:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:32.534751 | orchestrator | 2026-04-13 01:30:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:32.534800 | orchestrator | 2026-04-13 01:30:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:35.587411 | orchestrator | 2026-04-13 01:30:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:35.588243 | orchestrator | 2026-04-13 01:30:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:35.588283 | orchestrator | 2026-04-13 01:30:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:30:38.645420 | orchestrator | 2026-04-13 01:30:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:38.650719 | orchestrator | 2026-04-13 01:30:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:38.650788 | orchestrator | 2026-04-13 01:30:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:41.696323 | orchestrator | 2026-04-13 01:30:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:41.696563 | orchestrator | 2026-04-13 01:30:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:41.696639 | orchestrator | 2026-04-13 01:30:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:44.743969 | orchestrator | 2026-04-13 01:30:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:44.745359 | orchestrator | 2026-04-13 01:30:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:44.745439 | orchestrator | 2026-04-13 01:30:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:47.794128 | orchestrator | 2026-04-13 01:30:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:47.794209 | orchestrator | 2026-04-13 01:30:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:47.794219 | orchestrator | 2026-04-13 01:30:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:50.845103 | orchestrator | 2026-04-13 01:30:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:50.847804 | orchestrator | 2026-04-13 01:30:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:50.847830 | orchestrator | 2026-04-13 01:30:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:53.894728 | orchestrator | 2026-04-13 
01:30:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:53.896849 | orchestrator | 2026-04-13 01:30:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:53.896900 | orchestrator | 2026-04-13 01:30:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:56.946485 | orchestrator | 2026-04-13 01:30:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:30:56.949501 | orchestrator | 2026-04-13 01:30:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:30:56.949539 | orchestrator | 2026-04-13 01:30:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:30:59.999693 | orchestrator | 2026-04-13 01:31:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:00.001824 | orchestrator | 2026-04-13 01:31:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:00.001897 | orchestrator | 2026-04-13 01:31:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:03.052341 | orchestrator | 2026-04-13 01:31:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:03.055483 | orchestrator | 2026-04-13 01:31:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:03.055569 | orchestrator | 2026-04-13 01:31:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:06.101188 | orchestrator | 2026-04-13 01:31:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:06.104764 | orchestrator | 2026-04-13 01:31:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:06.104835 | orchestrator | 2026-04-13 01:31:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:09.159594 | orchestrator | 2026-04-13 01:31:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:31:09.161386 | orchestrator | 2026-04-13 01:31:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:09.161723 | orchestrator | 2026-04-13 01:31:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:12.210439 | orchestrator | 2026-04-13 01:31:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:12.211963 | orchestrator | 2026-04-13 01:31:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:12.212005 | orchestrator | 2026-04-13 01:31:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:15.262570 | orchestrator | 2026-04-13 01:31:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:15.264707 | orchestrator | 2026-04-13 01:31:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:15.264740 | orchestrator | 2026-04-13 01:31:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:18.310337 | orchestrator | 2026-04-13 01:31:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:18.312234 | orchestrator | 2026-04-13 01:31:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:18.313336 | orchestrator | 2026-04-13 01:31:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:21.370408 | orchestrator | 2026-04-13 01:31:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:21.372613 | orchestrator | 2026-04-13 01:31:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:21.372718 | orchestrator | 2026-04-13 01:31:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:24.425514 | orchestrator | 2026-04-13 01:31:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:24.428717 | orchestrator | 2026-04-13 01:31:24 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:24.428780 | orchestrator | 2026-04-13 01:31:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:27.484744 | orchestrator | 2026-04-13 01:31:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:27.485423 | orchestrator | 2026-04-13 01:31:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:27.485487 | orchestrator | 2026-04-13 01:31:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:30.539963 | orchestrator | 2026-04-13 01:31:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:30.541982 | orchestrator | 2026-04-13 01:31:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:30.542335 | orchestrator | 2026-04-13 01:31:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:33.588956 | orchestrator | 2026-04-13 01:31:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:33.590143 | orchestrator | 2026-04-13 01:31:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:33.590184 | orchestrator | 2026-04-13 01:31:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:36.636639 | orchestrator | 2026-04-13 01:31:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:36.638459 | orchestrator | 2026-04-13 01:31:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:31:36.638520 | orchestrator | 2026-04-13 01:31:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:31:39.702781 | orchestrator | 2026-04-13 01:31:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:31:39.704533 | orchestrator | 2026-04-13 01:31:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:31:39.704605 | orchestrator | 2026-04-13 01:31:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:31:42.757419 | orchestrator | 2026-04-13 01:31:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:31:42.758661 | orchestrator | 2026-04-13 01:31:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:31:42.758789 | orchestrator | 2026-04-13 01:31:42 | INFO  | Wait 1 second(s) until the next check
[... identical status checks (both tasks in state STARTED, wait 1 second) repeated every ~3 seconds from 01:31:45 to 01:37:09 ...]
2026-04-13 01:37:12.595405 | orchestrator | 2026-04-13 01:37:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:37:12.597473 | orchestrator | 2026-04-13 01:37:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:37:12.597661 | orchestrator | 2026-04-13 01:37:12 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:37:15.648171 | orchestrator | 2026-04-13 01:37:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:15.650825 | orchestrator | 2026-04-13 01:37:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:15.651048 | orchestrator | 2026-04-13 01:37:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:18.702565 | orchestrator | 2026-04-13 01:37:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:18.706676 | orchestrator | 2026-04-13 01:37:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:18.706781 | orchestrator | 2026-04-13 01:37:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:21.760545 | orchestrator | 2026-04-13 01:37:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:21.762206 | orchestrator | 2026-04-13 01:37:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:21.762472 | orchestrator | 2026-04-13 01:37:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:24.815841 | orchestrator | 2026-04-13 01:37:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:24.816830 | orchestrator | 2026-04-13 01:37:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:24.816871 | orchestrator | 2026-04-13 01:37:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:27.876540 | orchestrator | 2026-04-13 01:37:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:27.877832 | orchestrator | 2026-04-13 01:37:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:27.877974 | orchestrator | 2026-04-13 01:37:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:30.928431 | orchestrator | 2026-04-13 
01:37:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:30.929182 | orchestrator | 2026-04-13 01:37:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:30.929246 | orchestrator | 2026-04-13 01:37:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:33.981303 | orchestrator | 2026-04-13 01:37:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:33.982560 | orchestrator | 2026-04-13 01:37:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:33.982594 | orchestrator | 2026-04-13 01:37:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:37.034958 | orchestrator | 2026-04-13 01:37:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:37.036733 | orchestrator | 2026-04-13 01:37:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:37.037019 | orchestrator | 2026-04-13 01:37:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:40.082979 | orchestrator | 2026-04-13 01:37:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:40.083955 | orchestrator | 2026-04-13 01:37:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:40.084138 | orchestrator | 2026-04-13 01:37:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:43.134753 | orchestrator | 2026-04-13 01:37:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:43.136699 | orchestrator | 2026-04-13 01:37:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:43.136757 | orchestrator | 2026-04-13 01:37:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:46.188747 | orchestrator | 2026-04-13 01:37:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:37:46.188931 | orchestrator | 2026-04-13 01:37:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:46.188952 | orchestrator | 2026-04-13 01:37:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:49.226987 | orchestrator | 2026-04-13 01:37:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:49.228627 | orchestrator | 2026-04-13 01:37:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:49.228663 | orchestrator | 2026-04-13 01:37:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:52.291671 | orchestrator | 2026-04-13 01:37:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:52.292638 | orchestrator | 2026-04-13 01:37:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:52.292670 | orchestrator | 2026-04-13 01:37:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:55.336031 | orchestrator | 2026-04-13 01:37:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:55.337944 | orchestrator | 2026-04-13 01:37:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:55.337982 | orchestrator | 2026-04-13 01:37:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:37:58.395038 | orchestrator | 2026-04-13 01:37:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:37:58.395669 | orchestrator | 2026-04-13 01:37:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:37:58.395809 | orchestrator | 2026-04-13 01:37:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:01.451793 | orchestrator | 2026-04-13 01:38:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:01.454785 | orchestrator | 2026-04-13 01:38:01 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:01.454835 | orchestrator | 2026-04-13 01:38:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:04.507925 | orchestrator | 2026-04-13 01:38:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:04.509982 | orchestrator | 2026-04-13 01:38:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:04.510116 | orchestrator | 2026-04-13 01:38:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:07.562600 | orchestrator | 2026-04-13 01:38:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:07.565708 | orchestrator | 2026-04-13 01:38:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:07.566270 | orchestrator | 2026-04-13 01:38:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:10.615776 | orchestrator | 2026-04-13 01:38:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:10.617985 | orchestrator | 2026-04-13 01:38:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:10.618146 | orchestrator | 2026-04-13 01:38:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:13.668491 | orchestrator | 2026-04-13 01:38:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:13.669300 | orchestrator | 2026-04-13 01:38:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:13.669324 | orchestrator | 2026-04-13 01:38:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:16.723614 | orchestrator | 2026-04-13 01:38:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:16.725190 | orchestrator | 2026-04-13 01:38:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:38:16.725233 | orchestrator | 2026-04-13 01:38:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:19.792239 | orchestrator | 2026-04-13 01:38:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:19.792804 | orchestrator | 2026-04-13 01:38:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:19.793247 | orchestrator | 2026-04-13 01:38:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:22.847669 | orchestrator | 2026-04-13 01:38:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:22.850373 | orchestrator | 2026-04-13 01:38:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:22.850414 | orchestrator | 2026-04-13 01:38:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:25.903262 | orchestrator | 2026-04-13 01:38:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:25.904921 | orchestrator | 2026-04-13 01:38:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:25.905017 | orchestrator | 2026-04-13 01:38:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:28.956194 | orchestrator | 2026-04-13 01:38:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:28.957769 | orchestrator | 2026-04-13 01:38:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:28.957798 | orchestrator | 2026-04-13 01:38:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:32.016159 | orchestrator | 2026-04-13 01:38:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:32.017653 | orchestrator | 2026-04-13 01:38:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:32.017680 | orchestrator | 2026-04-13 01:38:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:38:35.067883 | orchestrator | 2026-04-13 01:38:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:35.070272 | orchestrator | 2026-04-13 01:38:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:35.070464 | orchestrator | 2026-04-13 01:38:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:38.118483 | orchestrator | 2026-04-13 01:38:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:38.121637 | orchestrator | 2026-04-13 01:38:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:38.121715 | orchestrator | 2026-04-13 01:38:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:41.172602 | orchestrator | 2026-04-13 01:38:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:41.173485 | orchestrator | 2026-04-13 01:38:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:41.173530 | orchestrator | 2026-04-13 01:38:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:44.222143 | orchestrator | 2026-04-13 01:38:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:44.224385 | orchestrator | 2026-04-13 01:38:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:44.224643 | orchestrator | 2026-04-13 01:38:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:47.268872 | orchestrator | 2026-04-13 01:38:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:47.269594 | orchestrator | 2026-04-13 01:38:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:47.269630 | orchestrator | 2026-04-13 01:38:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:50.322460 | orchestrator | 2026-04-13 
01:38:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:50.324638 | orchestrator | 2026-04-13 01:38:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:50.324836 | orchestrator | 2026-04-13 01:38:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:53.383416 | orchestrator | 2026-04-13 01:38:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:53.384882 | orchestrator | 2026-04-13 01:38:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:53.384934 | orchestrator | 2026-04-13 01:38:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:56.441372 | orchestrator | 2026-04-13 01:38:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:56.443332 | orchestrator | 2026-04-13 01:38:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:56.443493 | orchestrator | 2026-04-13 01:38:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:38:59.490984 | orchestrator | 2026-04-13 01:38:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:38:59.493669 | orchestrator | 2026-04-13 01:38:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:38:59.493709 | orchestrator | 2026-04-13 01:38:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:02.539313 | orchestrator | 2026-04-13 01:39:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:02.540720 | orchestrator | 2026-04-13 01:39:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:02.540762 | orchestrator | 2026-04-13 01:39:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:05.586374 | orchestrator | 2026-04-13 01:39:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:39:05.587929 | orchestrator | 2026-04-13 01:39:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:05.588002 | orchestrator | 2026-04-13 01:39:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:08.656076 | orchestrator | 2026-04-13 01:39:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:08.658569 | orchestrator | 2026-04-13 01:39:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:08.658705 | orchestrator | 2026-04-13 01:39:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:11.713717 | orchestrator | 2026-04-13 01:39:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:11.714882 | orchestrator | 2026-04-13 01:39:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:11.714918 | orchestrator | 2026-04-13 01:39:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:14.756840 | orchestrator | 2026-04-13 01:39:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:14.758448 | orchestrator | 2026-04-13 01:39:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:14.758496 | orchestrator | 2026-04-13 01:39:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:17.811977 | orchestrator | 2026-04-13 01:39:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:17.814857 | orchestrator | 2026-04-13 01:39:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:17.814938 | orchestrator | 2026-04-13 01:39:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:20.867524 | orchestrator | 2026-04-13 01:39:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:20.868800 | orchestrator | 2026-04-13 01:39:20 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:20.868815 | orchestrator | 2026-04-13 01:39:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:23.933396 | orchestrator | 2026-04-13 01:39:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:23.935882 | orchestrator | 2026-04-13 01:39:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:23.936010 | orchestrator | 2026-04-13 01:39:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:26.989061 | orchestrator | 2026-04-13 01:39:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:26.990588 | orchestrator | 2026-04-13 01:39:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:26.990623 | orchestrator | 2026-04-13 01:39:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:30.046163 | orchestrator | 2026-04-13 01:39:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:30.047007 | orchestrator | 2026-04-13 01:39:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:30.047062 | orchestrator | 2026-04-13 01:39:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:33.101161 | orchestrator | 2026-04-13 01:39:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:33.102620 | orchestrator | 2026-04-13 01:39:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:33.102683 | orchestrator | 2026-04-13 01:39:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:36.160350 | orchestrator | 2026-04-13 01:39:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:36.162705 | orchestrator | 2026-04-13 01:39:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:39:36.162753 | orchestrator | 2026-04-13 01:39:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:39.224330 | orchestrator | 2026-04-13 01:39:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:39.225310 | orchestrator | 2026-04-13 01:39:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:39.225347 | orchestrator | 2026-04-13 01:39:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:42.277928 | orchestrator | 2026-04-13 01:39:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:42.280248 | orchestrator | 2026-04-13 01:39:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:42.280335 | orchestrator | 2026-04-13 01:39:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:45.335412 | orchestrator | 2026-04-13 01:39:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:45.338441 | orchestrator | 2026-04-13 01:39:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:45.338492 | orchestrator | 2026-04-13 01:39:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:48.393903 | orchestrator | 2026-04-13 01:39:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:48.396787 | orchestrator | 2026-04-13 01:39:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:48.396991 | orchestrator | 2026-04-13 01:39:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:51.442864 | orchestrator | 2026-04-13 01:39:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:51.445688 | orchestrator | 2026-04-13 01:39:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:51.445730 | orchestrator | 2026-04-13 01:39:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:39:54.508596 | orchestrator | 2026-04-13 01:39:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:54.510411 | orchestrator | 2026-04-13 01:39:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:54.510440 | orchestrator | 2026-04-13 01:39:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:39:57.559000 | orchestrator | 2026-04-13 01:39:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:39:57.560600 | orchestrator | 2026-04-13 01:39:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:39:57.560704 | orchestrator | 2026-04-13 01:39:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:00.609101 | orchestrator | 2026-04-13 01:40:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:00.610461 | orchestrator | 2026-04-13 01:40:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:00.610549 | orchestrator | 2026-04-13 01:40:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:03.664754 | orchestrator | 2026-04-13 01:40:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:03.666410 | orchestrator | 2026-04-13 01:40:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:03.666456 | orchestrator | 2026-04-13 01:40:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:06.715189 | orchestrator | 2026-04-13 01:40:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:06.716876 | orchestrator | 2026-04-13 01:40:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:06.716907 | orchestrator | 2026-04-13 01:40:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:09.762798 | orchestrator | 2026-04-13 
01:40:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:09.764749 | orchestrator | 2026-04-13 01:40:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:09.764783 | orchestrator | 2026-04-13 01:40:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:12.813886 | orchestrator | 2026-04-13 01:40:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:12.815800 | orchestrator | 2026-04-13 01:40:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:12.815880 | orchestrator | 2026-04-13 01:40:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:15.865695 | orchestrator | 2026-04-13 01:40:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:15.867976 | orchestrator | 2026-04-13 01:40:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:15.868646 | orchestrator | 2026-04-13 01:40:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:18.919383 | orchestrator | 2026-04-13 01:40:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:18.922464 | orchestrator | 2026-04-13 01:40:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:18.922563 | orchestrator | 2026-04-13 01:40:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:21.973867 | orchestrator | 2026-04-13 01:40:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:21.975414 | orchestrator | 2026-04-13 01:40:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:21.975460 | orchestrator | 2026-04-13 01:40:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:25.034856 | orchestrator | 2026-04-13 01:40:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:40:25.037311 | orchestrator | 2026-04-13 01:40:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:25.037378 | orchestrator | 2026-04-13 01:40:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:28.091171 | orchestrator | 2026-04-13 01:40:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:28.092564 | orchestrator | 2026-04-13 01:40:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:28.092595 | orchestrator | 2026-04-13 01:40:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:31.145453 | orchestrator | 2026-04-13 01:40:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:31.147531 | orchestrator | 2026-04-13 01:40:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:31.147619 | orchestrator | 2026-04-13 01:40:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:34.193637 | orchestrator | 2026-04-13 01:40:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:34.195198 | orchestrator | 2026-04-13 01:40:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:34.195262 | orchestrator | 2026-04-13 01:40:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:37.246506 | orchestrator | 2026-04-13 01:40:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:37.247467 | orchestrator | 2026-04-13 01:40:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:37.247511 | orchestrator | 2026-04-13 01:40:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:40.299700 | orchestrator | 2026-04-13 01:40:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:40.300737 | orchestrator | 2026-04-13 01:40:40 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:40.300779 | orchestrator | 2026-04-13 01:40:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:43.350817 | orchestrator | 2026-04-13 01:40:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:43.354538 | orchestrator | 2026-04-13 01:40:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:43.354612 | orchestrator | 2026-04-13 01:40:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:46.408461 | orchestrator | 2026-04-13 01:40:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:46.409747 | orchestrator | 2026-04-13 01:40:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:46.409786 | orchestrator | 2026-04-13 01:40:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:49.460182 | orchestrator | 2026-04-13 01:40:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:49.461496 | orchestrator | 2026-04-13 01:40:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:49.461600 | orchestrator | 2026-04-13 01:40:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:52.510114 | orchestrator | 2026-04-13 01:40:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:52.510658 | orchestrator | 2026-04-13 01:40:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:40:52.510694 | orchestrator | 2026-04-13 01:40:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:40:55.556886 | orchestrator | 2026-04-13 01:40:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:40:55.559473 | orchestrator | 2026-04-13 01:40:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:40:55.559511 | orchestrator | 2026-04-13 01:40:55 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:40:58.607177 | orchestrator | 2026-04-13 01:40:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:40:58.609889 | orchestrator | 2026-04-13 01:40:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:40:58.609911 | orchestrator | 2026-04-13 01:40:58 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:41:01 to 01:45:54; both tasks remained in state STARTED throughout ...]
2026-04-13 01:45:57.842850 | orchestrator | 2026-04-13 01:45:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:45:57.848791 | orchestrator | 2026-04-13 01:45:57 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:45:57.848865 | orchestrator | 2026-04-13 01:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:00.901043 | orchestrator | 2026-04-13 01:46:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:00.904765 | orchestrator | 2026-04-13 01:46:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:00.904849 | orchestrator | 2026-04-13 01:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:03.962503 | orchestrator | 2026-04-13 01:46:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:03.964130 | orchestrator | 2026-04-13 01:46:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:03.964191 | orchestrator | 2026-04-13 01:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:07.021049 | orchestrator | 2026-04-13 01:46:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:07.024540 | orchestrator | 2026-04-13 01:46:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:07.024615 | orchestrator | 2026-04-13 01:46:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:10.083377 | orchestrator | 2026-04-13 01:46:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:10.087311 | orchestrator | 2026-04-13 01:46:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:10.087391 | orchestrator | 2026-04-13 01:46:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:13.140335 | orchestrator | 2026-04-13 01:46:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:13.141759 | orchestrator | 2026-04-13 01:46:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:46:13.141785 | orchestrator | 2026-04-13 01:46:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:16.194094 | orchestrator | 2026-04-13 01:46:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:16.197603 | orchestrator | 2026-04-13 01:46:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:16.197741 | orchestrator | 2026-04-13 01:46:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:19.248173 | orchestrator | 2026-04-13 01:46:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:19.251178 | orchestrator | 2026-04-13 01:46:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:19.251253 | orchestrator | 2026-04-13 01:46:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:22.303917 | orchestrator | 2026-04-13 01:46:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:22.306225 | orchestrator | 2026-04-13 01:46:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:22.306325 | orchestrator | 2026-04-13 01:46:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:25.353319 | orchestrator | 2026-04-13 01:46:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:25.354599 | orchestrator | 2026-04-13 01:46:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:25.354643 | orchestrator | 2026-04-13 01:46:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:28.405797 | orchestrator | 2026-04-13 01:46:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:28.407538 | orchestrator | 2026-04-13 01:46:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:28.407588 | orchestrator | 2026-04-13 01:46:28 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:46:31.459329 | orchestrator | 2026-04-13 01:46:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:31.460488 | orchestrator | 2026-04-13 01:46:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:31.460596 | orchestrator | 2026-04-13 01:46:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:34.510476 | orchestrator | 2026-04-13 01:46:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:34.513713 | orchestrator | 2026-04-13 01:46:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:34.513813 | orchestrator | 2026-04-13 01:46:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:37.559776 | orchestrator | 2026-04-13 01:46:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:37.560510 | orchestrator | 2026-04-13 01:46:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:37.560542 | orchestrator | 2026-04-13 01:46:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:40.614138 | orchestrator | 2026-04-13 01:46:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:40.615423 | orchestrator | 2026-04-13 01:46:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:40.615467 | orchestrator | 2026-04-13 01:46:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:43.663530 | orchestrator | 2026-04-13 01:46:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:43.664971 | orchestrator | 2026-04-13 01:46:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:43.665031 | orchestrator | 2026-04-13 01:46:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:46.720400 | orchestrator | 2026-04-13 
01:46:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:46.721885 | orchestrator | 2026-04-13 01:46:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:46.721937 | orchestrator | 2026-04-13 01:46:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:49.784899 | orchestrator | 2026-04-13 01:46:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:49.789715 | orchestrator | 2026-04-13 01:46:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:49.789791 | orchestrator | 2026-04-13 01:46:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:52.834599 | orchestrator | 2026-04-13 01:46:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:52.835931 | orchestrator | 2026-04-13 01:46:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:52.835962 | orchestrator | 2026-04-13 01:46:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:55.884893 | orchestrator | 2026-04-13 01:46:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:55.886229 | orchestrator | 2026-04-13 01:46:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:55.886302 | orchestrator | 2026-04-13 01:46:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:46:58.939336 | orchestrator | 2026-04-13 01:46:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:46:58.940050 | orchestrator | 2026-04-13 01:46:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:46:58.940141 | orchestrator | 2026-04-13 01:46:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:01.989836 | orchestrator | 2026-04-13 01:47:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:47:01.991478 | orchestrator | 2026-04-13 01:47:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:01.991518 | orchestrator | 2026-04-13 01:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:05.048444 | orchestrator | 2026-04-13 01:47:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:05.049840 | orchestrator | 2026-04-13 01:47:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:05.049899 | orchestrator | 2026-04-13 01:47:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:08.095043 | orchestrator | 2026-04-13 01:47:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:08.096193 | orchestrator | 2026-04-13 01:47:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:08.096254 | orchestrator | 2026-04-13 01:47:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:11.143524 | orchestrator | 2026-04-13 01:47:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:11.146818 | orchestrator | 2026-04-13 01:47:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:11.146889 | orchestrator | 2026-04-13 01:47:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:14.190080 | orchestrator | 2026-04-13 01:47:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:14.191357 | orchestrator | 2026-04-13 01:47:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:14.191438 | orchestrator | 2026-04-13 01:47:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:17.241483 | orchestrator | 2026-04-13 01:47:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:17.243029 | orchestrator | 2026-04-13 01:47:17 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:17.243154 | orchestrator | 2026-04-13 01:47:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:20.300199 | orchestrator | 2026-04-13 01:47:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:20.302617 | orchestrator | 2026-04-13 01:47:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:20.302740 | orchestrator | 2026-04-13 01:47:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:23.345777 | orchestrator | 2026-04-13 01:47:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:23.346351 | orchestrator | 2026-04-13 01:47:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:23.346387 | orchestrator | 2026-04-13 01:47:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:26.402644 | orchestrator | 2026-04-13 01:47:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:26.405861 | orchestrator | 2026-04-13 01:47:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:26.405899 | orchestrator | 2026-04-13 01:47:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:29.453808 | orchestrator | 2026-04-13 01:47:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:29.454426 | orchestrator | 2026-04-13 01:47:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:29.454481 | orchestrator | 2026-04-13 01:47:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:32.499911 | orchestrator | 2026-04-13 01:47:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:32.501521 | orchestrator | 2026-04-13 01:47:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:47:32.501560 | orchestrator | 2026-04-13 01:47:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:35.547313 | orchestrator | 2026-04-13 01:47:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:35.547682 | orchestrator | 2026-04-13 01:47:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:35.547697 | orchestrator | 2026-04-13 01:47:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:38.593829 | orchestrator | 2026-04-13 01:47:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:38.595470 | orchestrator | 2026-04-13 01:47:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:38.595509 | orchestrator | 2026-04-13 01:47:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:41.643062 | orchestrator | 2026-04-13 01:47:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:41.647051 | orchestrator | 2026-04-13 01:47:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:41.647123 | orchestrator | 2026-04-13 01:47:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:44.699350 | orchestrator | 2026-04-13 01:47:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:44.700402 | orchestrator | 2026-04-13 01:47:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:44.700673 | orchestrator | 2026-04-13 01:47:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:47.751077 | orchestrator | 2026-04-13 01:47:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:47.752342 | orchestrator | 2026-04-13 01:47:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:47.752382 | orchestrator | 2026-04-13 01:47:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:47:50.804647 | orchestrator | 2026-04-13 01:47:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:50.805033 | orchestrator | 2026-04-13 01:47:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:50.805064 | orchestrator | 2026-04-13 01:47:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:53.849343 | orchestrator | 2026-04-13 01:47:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:53.851473 | orchestrator | 2026-04-13 01:47:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:53.851552 | orchestrator | 2026-04-13 01:47:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:56.902241 | orchestrator | 2026-04-13 01:47:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:56.903528 | orchestrator | 2026-04-13 01:47:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:56.903929 | orchestrator | 2026-04-13 01:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:47:59.956015 | orchestrator | 2026-04-13 01:47:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:47:59.959050 | orchestrator | 2026-04-13 01:47:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:47:59.959114 | orchestrator | 2026-04-13 01:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:03.002859 | orchestrator | 2026-04-13 01:48:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:03.005170 | orchestrator | 2026-04-13 01:48:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:03.005350 | orchestrator | 2026-04-13 01:48:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:06.054209 | orchestrator | 2026-04-13 
01:48:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:06.058368 | orchestrator | 2026-04-13 01:48:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:06.058461 | orchestrator | 2026-04-13 01:48:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:09.113525 | orchestrator | 2026-04-13 01:48:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:09.116671 | orchestrator | 2026-04-13 01:48:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:09.116750 | orchestrator | 2026-04-13 01:48:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:12.166283 | orchestrator | 2026-04-13 01:48:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:12.167903 | orchestrator | 2026-04-13 01:48:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:12.167980 | orchestrator | 2026-04-13 01:48:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:15.230743 | orchestrator | 2026-04-13 01:48:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:15.234671 | orchestrator | 2026-04-13 01:48:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:15.234904 | orchestrator | 2026-04-13 01:48:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:18.283780 | orchestrator | 2026-04-13 01:48:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:18.286424 | orchestrator | 2026-04-13 01:48:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:18.286483 | orchestrator | 2026-04-13 01:48:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:21.339020 | orchestrator | 2026-04-13 01:48:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:48:21.340749 | orchestrator | 2026-04-13 01:48:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:21.340993 | orchestrator | 2026-04-13 01:48:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:24.389681 | orchestrator | 2026-04-13 01:48:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:24.390428 | orchestrator | 2026-04-13 01:48:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:24.390473 | orchestrator | 2026-04-13 01:48:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:27.433657 | orchestrator | 2026-04-13 01:48:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:27.435855 | orchestrator | 2026-04-13 01:48:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:27.435917 | orchestrator | 2026-04-13 01:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:30.485257 | orchestrator | 2026-04-13 01:48:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:30.487041 | orchestrator | 2026-04-13 01:48:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:30.487088 | orchestrator | 2026-04-13 01:48:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:33.535710 | orchestrator | 2026-04-13 01:48:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:33.537877 | orchestrator | 2026-04-13 01:48:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:33.537971 | orchestrator | 2026-04-13 01:48:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:36.590817 | orchestrator | 2026-04-13 01:48:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:36.592352 | orchestrator | 2026-04-13 01:48:36 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:36.592407 | orchestrator | 2026-04-13 01:48:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:39.642927 | orchestrator | 2026-04-13 01:48:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:39.644100 | orchestrator | 2026-04-13 01:48:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:39.644525 | orchestrator | 2026-04-13 01:48:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:42.695942 | orchestrator | 2026-04-13 01:48:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:42.698568 | orchestrator | 2026-04-13 01:48:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:42.698744 | orchestrator | 2026-04-13 01:48:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:45.750745 | orchestrator | 2026-04-13 01:48:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:45.751941 | orchestrator | 2026-04-13 01:48:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:45.751982 | orchestrator | 2026-04-13 01:48:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:48.802299 | orchestrator | 2026-04-13 01:48:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:48.804749 | orchestrator | 2026-04-13 01:48:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:48.804816 | orchestrator | 2026-04-13 01:48:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:51.860393 | orchestrator | 2026-04-13 01:48:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:51.860481 | orchestrator | 2026-04-13 01:48:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:48:51.860494 | orchestrator | 2026-04-13 01:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:54.910913 | orchestrator | 2026-04-13 01:48:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:54.913590 | orchestrator | 2026-04-13 01:48:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:54.913653 | orchestrator | 2026-04-13 01:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:48:57.965809 | orchestrator | 2026-04-13 01:48:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:48:57.968451 | orchestrator | 2026-04-13 01:48:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:48:57.968954 | orchestrator | 2026-04-13 01:48:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:01.016784 | orchestrator | 2026-04-13 01:49:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:01.017067 | orchestrator | 2026-04-13 01:49:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:01.017165 | orchestrator | 2026-04-13 01:49:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:04.058484 | orchestrator | 2026-04-13 01:49:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:04.060067 | orchestrator | 2026-04-13 01:49:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:04.060205 | orchestrator | 2026-04-13 01:49:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:07.104679 | orchestrator | 2026-04-13 01:49:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:07.106949 | orchestrator | 2026-04-13 01:49:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:07.106978 | orchestrator | 2026-04-13 01:49:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:49:10.156832 | orchestrator | 2026-04-13 01:49:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:10.160996 | orchestrator | 2026-04-13 01:49:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:10.161069 | orchestrator | 2026-04-13 01:49:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:13.212303 | orchestrator | 2026-04-13 01:49:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:13.215454 | orchestrator | 2026-04-13 01:49:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:13.215476 | orchestrator | 2026-04-13 01:49:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:16.273299 | orchestrator | 2026-04-13 01:49:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:16.274500 | orchestrator | 2026-04-13 01:49:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:16.274522 | orchestrator | 2026-04-13 01:49:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:19.324724 | orchestrator | 2026-04-13 01:49:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:19.326786 | orchestrator | 2026-04-13 01:49:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:19.326832 | orchestrator | 2026-04-13 01:49:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:22.391978 | orchestrator | 2026-04-13 01:49:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:22.393557 | orchestrator | 2026-04-13 01:49:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:22.393595 | orchestrator | 2026-04-13 01:49:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:25.449493 | orchestrator | 2026-04-13 
01:49:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:25.451606 | orchestrator | 2026-04-13 01:49:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:25.451655 | orchestrator | 2026-04-13 01:49:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:28.505725 | orchestrator | 2026-04-13 01:49:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:28.507272 | orchestrator | 2026-04-13 01:49:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:28.507454 | orchestrator | 2026-04-13 01:49:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:31.555537 | orchestrator | 2026-04-13 01:49:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:31.556916 | orchestrator | 2026-04-13 01:49:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:31.556983 | orchestrator | 2026-04-13 01:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:34.607559 | orchestrator | 2026-04-13 01:49:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:34.608939 | orchestrator | 2026-04-13 01:49:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:34.608978 | orchestrator | 2026-04-13 01:49:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:37.663459 | orchestrator | 2026-04-13 01:49:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:49:37.664074 | orchestrator | 2026-04-13 01:49:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:49:37.664110 | orchestrator | 2026-04-13 01:49:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:49:40.715904 | orchestrator | 2026-04-13 01:49:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED
2026-04-13 01:49:40.717622 | orchestrator | 2026-04-13 01:49:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:49:40.717715 | orchestrator | 2026-04-13 01:49:40 | INFO  | Wait 1 second(s) until the next check
2026-04-13 01:49:43.771896 | orchestrator | 2026-04-13 01:49:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:49:43.773630 | orchestrator | 2026-04-13 01:49:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 01:49:43.773711 | orchestrator | 2026-04-13 01:49:43 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 s from 01:49:46 to 01:57:10; tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remain in state STARTED throughout, with a gap in output between 01:50:53 and 01:52:54 ...]
2026-04-13 01:57:13.351572 | orchestrator | 2026-04-13 01:57:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 01:57:13.352699 | orchestrator | 2026-04-13 01:57:13 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:13.352988 | orchestrator | 2026-04-13 01:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:16.404589 | orchestrator | 2026-04-13 01:57:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:16.407337 | orchestrator | 2026-04-13 01:57:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:16.407390 | orchestrator | 2026-04-13 01:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:19.449777 | orchestrator | 2026-04-13 01:57:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:19.459062 | orchestrator | 2026-04-13 01:57:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:19.459151 | orchestrator | 2026-04-13 01:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:22.496506 | orchestrator | 2026-04-13 01:57:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:22.497815 | orchestrator | 2026-04-13 01:57:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:22.497861 | orchestrator | 2026-04-13 01:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:25.544453 | orchestrator | 2026-04-13 01:57:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:25.547387 | orchestrator | 2026-04-13 01:57:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:25.547442 | orchestrator | 2026-04-13 01:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:28.587584 | orchestrator | 2026-04-13 01:57:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:28.590339 | orchestrator | 2026-04-13 01:57:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:57:28.590394 | orchestrator | 2026-04-13 01:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:31.643185 | orchestrator | 2026-04-13 01:57:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:31.644942 | orchestrator | 2026-04-13 01:57:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:31.644985 | orchestrator | 2026-04-13 01:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:34.689380 | orchestrator | 2026-04-13 01:57:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:34.691669 | orchestrator | 2026-04-13 01:57:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:34.691908 | orchestrator | 2026-04-13 01:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:37.740736 | orchestrator | 2026-04-13 01:57:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:37.742606 | orchestrator | 2026-04-13 01:57:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:37.742769 | orchestrator | 2026-04-13 01:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:40.790096 | orchestrator | 2026-04-13 01:57:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:40.791609 | orchestrator | 2026-04-13 01:57:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:40.791710 | orchestrator | 2026-04-13 01:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:43.839428 | orchestrator | 2026-04-13 01:57:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:43.840657 | orchestrator | 2026-04-13 01:57:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:43.840698 | orchestrator | 2026-04-13 01:57:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:57:46.890104 | orchestrator | 2026-04-13 01:57:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:46.891115 | orchestrator | 2026-04-13 01:57:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:46.891299 | orchestrator | 2026-04-13 01:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:49.941443 | orchestrator | 2026-04-13 01:57:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:49.943797 | orchestrator | 2026-04-13 01:57:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:49.943879 | orchestrator | 2026-04-13 01:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:52.993685 | orchestrator | 2026-04-13 01:57:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:52.994165 | orchestrator | 2026-04-13 01:57:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:52.994201 | orchestrator | 2026-04-13 01:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:56.045669 | orchestrator | 2026-04-13 01:57:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:56.047452 | orchestrator | 2026-04-13 01:57:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:56.047509 | orchestrator | 2026-04-13 01:57:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:57:59.096094 | orchestrator | 2026-04-13 01:57:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:57:59.097396 | orchestrator | 2026-04-13 01:57:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:57:59.097445 | orchestrator | 2026-04-13 01:57:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:02.144584 | orchestrator | 2026-04-13 
01:58:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:02.146731 | orchestrator | 2026-04-13 01:58:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:02.146831 | orchestrator | 2026-04-13 01:58:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:05.199132 | orchestrator | 2026-04-13 01:58:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:05.200332 | orchestrator | 2026-04-13 01:58:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:05.200481 | orchestrator | 2026-04-13 01:58:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:08.254795 | orchestrator | 2026-04-13 01:58:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:08.255731 | orchestrator | 2026-04-13 01:58:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:08.255756 | orchestrator | 2026-04-13 01:58:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:11.301804 | orchestrator | 2026-04-13 01:58:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:11.302417 | orchestrator | 2026-04-13 01:58:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:11.302958 | orchestrator | 2026-04-13 01:58:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:14.353935 | orchestrator | 2026-04-13 01:58:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:14.355872 | orchestrator | 2026-04-13 01:58:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:14.355944 | orchestrator | 2026-04-13 01:58:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:17.413795 | orchestrator | 2026-04-13 01:58:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:58:17.416307 | orchestrator | 2026-04-13 01:58:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:17.416367 | orchestrator | 2026-04-13 01:58:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:20.465284 | orchestrator | 2026-04-13 01:58:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:20.467769 | orchestrator | 2026-04-13 01:58:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:20.467808 | orchestrator | 2026-04-13 01:58:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:23.520575 | orchestrator | 2026-04-13 01:58:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:23.521745 | orchestrator | 2026-04-13 01:58:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:23.521855 | orchestrator | 2026-04-13 01:58:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:26.571232 | orchestrator | 2026-04-13 01:58:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:26.572112 | orchestrator | 2026-04-13 01:58:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:26.572168 | orchestrator | 2026-04-13 01:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:29.621312 | orchestrator | 2026-04-13 01:58:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:29.623418 | orchestrator | 2026-04-13 01:58:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:29.623759 | orchestrator | 2026-04-13 01:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:32.672762 | orchestrator | 2026-04-13 01:58:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:32.674870 | orchestrator | 2026-04-13 01:58:32 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:32.674933 | orchestrator | 2026-04-13 01:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:35.724919 | orchestrator | 2026-04-13 01:58:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:35.726970 | orchestrator | 2026-04-13 01:58:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:35.727331 | orchestrator | 2026-04-13 01:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:38.775916 | orchestrator | 2026-04-13 01:58:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:38.776145 | orchestrator | 2026-04-13 01:58:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:38.776196 | orchestrator | 2026-04-13 01:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:41.825877 | orchestrator | 2026-04-13 01:58:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:41.828042 | orchestrator | 2026-04-13 01:58:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:41.828115 | orchestrator | 2026-04-13 01:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:44.879034 | orchestrator | 2026-04-13 01:58:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:44.880486 | orchestrator | 2026-04-13 01:58:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:44.880517 | orchestrator | 2026-04-13 01:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:47.928176 | orchestrator | 2026-04-13 01:58:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:47.929488 | orchestrator | 2026-04-13 01:58:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
01:58:47.929532 | orchestrator | 2026-04-13 01:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:50.985703 | orchestrator | 2026-04-13 01:58:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:50.987778 | orchestrator | 2026-04-13 01:58:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:50.987828 | orchestrator | 2026-04-13 01:58:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:54.045589 | orchestrator | 2026-04-13 01:58:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:54.047549 | orchestrator | 2026-04-13 01:58:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:54.047604 | orchestrator | 2026-04-13 01:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:58:57.101368 | orchestrator | 2026-04-13 01:58:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:58:57.102087 | orchestrator | 2026-04-13 01:58:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:58:57.102305 | orchestrator | 2026-04-13 01:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:00.156394 | orchestrator | 2026-04-13 01:59:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:00.157288 | orchestrator | 2026-04-13 01:59:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:00.157324 | orchestrator | 2026-04-13 01:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:03.215476 | orchestrator | 2026-04-13 01:59:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:03.217227 | orchestrator | 2026-04-13 01:59:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:03.217284 | orchestrator | 2026-04-13 01:59:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 01:59:06.267083 | orchestrator | 2026-04-13 01:59:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:06.272281 | orchestrator | 2026-04-13 01:59:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:06.272341 | orchestrator | 2026-04-13 01:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:09.324847 | orchestrator | 2026-04-13 01:59:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:09.327297 | orchestrator | 2026-04-13 01:59:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:09.327362 | orchestrator | 2026-04-13 01:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:12.381916 | orchestrator | 2026-04-13 01:59:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:12.383752 | orchestrator | 2026-04-13 01:59:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:12.383792 | orchestrator | 2026-04-13 01:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:15.428501 | orchestrator | 2026-04-13 01:59:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:15.430094 | orchestrator | 2026-04-13 01:59:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:15.430156 | orchestrator | 2026-04-13 01:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:18.479011 | orchestrator | 2026-04-13 01:59:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:18.480207 | orchestrator | 2026-04-13 01:59:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:18.480431 | orchestrator | 2026-04-13 01:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:21.527912 | orchestrator | 2026-04-13 
01:59:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:21.528608 | orchestrator | 2026-04-13 01:59:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:21.528648 | orchestrator | 2026-04-13 01:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:24.572344 | orchestrator | 2026-04-13 01:59:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:24.573367 | orchestrator | 2026-04-13 01:59:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:24.573431 | orchestrator | 2026-04-13 01:59:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:27.612227 | orchestrator | 2026-04-13 01:59:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:27.613522 | orchestrator | 2026-04-13 01:59:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:27.613618 | orchestrator | 2026-04-13 01:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:30.659516 | orchestrator | 2026-04-13 01:59:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:30.659715 | orchestrator | 2026-04-13 01:59:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:30.659741 | orchestrator | 2026-04-13 01:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:33.709518 | orchestrator | 2026-04-13 01:59:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:33.711400 | orchestrator | 2026-04-13 01:59:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:33.711594 | orchestrator | 2026-04-13 01:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:36.753370 | orchestrator | 2026-04-13 01:59:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 01:59:36.755173 | orchestrator | 2026-04-13 01:59:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:36.755218 | orchestrator | 2026-04-13 01:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:39.802436 | orchestrator | 2026-04-13 01:59:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:39.804888 | orchestrator | 2026-04-13 01:59:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:39.804932 | orchestrator | 2026-04-13 01:59:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:42.851589 | orchestrator | 2026-04-13 01:59:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:42.853702 | orchestrator | 2026-04-13 01:59:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:42.853737 | orchestrator | 2026-04-13 01:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:45.903849 | orchestrator | 2026-04-13 01:59:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:45.906281 | orchestrator | 2026-04-13 01:59:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:45.906329 | orchestrator | 2026-04-13 01:59:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:48.953910 | orchestrator | 2026-04-13 01:59:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:48.956894 | orchestrator | 2026-04-13 01:59:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:48.956919 | orchestrator | 2026-04-13 01:59:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:52.018836 | orchestrator | 2026-04-13 01:59:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:52.020434 | orchestrator | 2026-04-13 01:59:52 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:52.020495 | orchestrator | 2026-04-13 01:59:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:55.064203 | orchestrator | 2026-04-13 01:59:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:55.064841 | orchestrator | 2026-04-13 01:59:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:55.064861 | orchestrator | 2026-04-13 01:59:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 01:59:58.117410 | orchestrator | 2026-04-13 01:59:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 01:59:58.118299 | orchestrator | 2026-04-13 01:59:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 01:59:58.118331 | orchestrator | 2026-04-13 01:59:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:01.170088 | orchestrator | 2026-04-13 02:00:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:01.170700 | orchestrator | 2026-04-13 02:00:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:01.170975 | orchestrator | 2026-04-13 02:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:04.219507 | orchestrator | 2026-04-13 02:00:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:04.220684 | orchestrator | 2026-04-13 02:00:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:04.220714 | orchestrator | 2026-04-13 02:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:07.272622 | orchestrator | 2026-04-13 02:00:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:07.274786 | orchestrator | 2026-04-13 02:00:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:00:07.274834 | orchestrator | 2026-04-13 02:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:10.319304 | orchestrator | 2026-04-13 02:00:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:10.321946 | orchestrator | 2026-04-13 02:00:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:10.321993 | orchestrator | 2026-04-13 02:00:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:13.367405 | orchestrator | 2026-04-13 02:00:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:13.368665 | orchestrator | 2026-04-13 02:00:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:13.368720 | orchestrator | 2026-04-13 02:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:16.423577 | orchestrator | 2026-04-13 02:00:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:16.424561 | orchestrator | 2026-04-13 02:00:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:16.424598 | orchestrator | 2026-04-13 02:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:19.473974 | orchestrator | 2026-04-13 02:00:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:19.474513 | orchestrator | 2026-04-13 02:00:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:19.474559 | orchestrator | 2026-04-13 02:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:22.523414 | orchestrator | 2026-04-13 02:00:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:22.525231 | orchestrator | 2026-04-13 02:00:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:22.525267 | orchestrator | 2026-04-13 02:00:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:00:25.576685 | orchestrator | 2026-04-13 02:00:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:25.579610 | orchestrator | 2026-04-13 02:00:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:25.579668 | orchestrator | 2026-04-13 02:00:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:28.632332 | orchestrator | 2026-04-13 02:00:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:28.633671 | orchestrator | 2026-04-13 02:00:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:28.633821 | orchestrator | 2026-04-13 02:00:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:31.683739 | orchestrator | 2026-04-13 02:00:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:31.685381 | orchestrator | 2026-04-13 02:00:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:31.685431 | orchestrator | 2026-04-13 02:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:34.739549 | orchestrator | 2026-04-13 02:00:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:34.742250 | orchestrator | 2026-04-13 02:00:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:34.742317 | orchestrator | 2026-04-13 02:00:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:37.801556 | orchestrator | 2026-04-13 02:00:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:37.803182 | orchestrator | 2026-04-13 02:00:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:37.803215 | orchestrator | 2026-04-13 02:00:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:40.855646 | orchestrator | 2026-04-13 
02:00:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:40.857322 | orchestrator | 2026-04-13 02:00:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:40.857378 | orchestrator | 2026-04-13 02:00:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:43.907774 | orchestrator | 2026-04-13 02:00:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:43.910256 | orchestrator | 2026-04-13 02:00:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:43.910297 | orchestrator | 2026-04-13 02:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:46.960161 | orchestrator | 2026-04-13 02:00:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:46.961296 | orchestrator | 2026-04-13 02:00:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:46.961317 | orchestrator | 2026-04-13 02:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:50.014805 | orchestrator | 2026-04-13 02:00:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:50.017420 | orchestrator | 2026-04-13 02:00:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:50.017609 | orchestrator | 2026-04-13 02:00:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:53.064475 | orchestrator | 2026-04-13 02:00:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:00:53.066434 | orchestrator | 2026-04-13 02:00:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:00:53.066660 | orchestrator | 2026-04-13 02:00:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:00:56.117606 | orchestrator | 2026-04-13 02:00:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:00:56.120238 | orchestrator | 2026-04-13 02:00:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:00:56.120274 | orchestrator | 2026-04-13 02:00:56 | INFO  | Wait 1 second(s) until the next check
[repeated status checks elided: tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 were polled every ~3 seconds and remained in state STARTED from 02:00:59 through 02:06:10]
2026-04-13 02:06:13.650094 | orchestrator | 2026-04-13 02:06:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state
STARTED 2026-04-13 02:06:13.651777 | orchestrator | 2026-04-13 02:06:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:13.652166 | orchestrator | 2026-04-13 02:06:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:16.701466 | orchestrator | 2026-04-13 02:06:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:16.702135 | orchestrator | 2026-04-13 02:06:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:16.702525 | orchestrator | 2026-04-13 02:06:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:19.746001 | orchestrator | 2026-04-13 02:06:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:19.748579 | orchestrator | 2026-04-13 02:06:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:19.748629 | orchestrator | 2026-04-13 02:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:22.799994 | orchestrator | 2026-04-13 02:06:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:22.802133 | orchestrator | 2026-04-13 02:06:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:22.802171 | orchestrator | 2026-04-13 02:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:25.848061 | orchestrator | 2026-04-13 02:06:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:25.848512 | orchestrator | 2026-04-13 02:06:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:25.848546 | orchestrator | 2026-04-13 02:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:28.897241 | orchestrator | 2026-04-13 02:06:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:28.900286 | orchestrator | 2026-04-13 02:06:28 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:28.900552 | orchestrator | 2026-04-13 02:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:31.943820 | orchestrator | 2026-04-13 02:06:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:31.945213 | orchestrator | 2026-04-13 02:06:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:31.945264 | orchestrator | 2026-04-13 02:06:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:34.993603 | orchestrator | 2026-04-13 02:06:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:34.996099 | orchestrator | 2026-04-13 02:06:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:34.996127 | orchestrator | 2026-04-13 02:06:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:38.051273 | orchestrator | 2026-04-13 02:06:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:38.052877 | orchestrator | 2026-04-13 02:06:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:38.052903 | orchestrator | 2026-04-13 02:06:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:41.111429 | orchestrator | 2026-04-13 02:06:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:41.113507 | orchestrator | 2026-04-13 02:06:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:41.114410 | orchestrator | 2026-04-13 02:06:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:44.167335 | orchestrator | 2026-04-13 02:06:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:44.170062 | orchestrator | 2026-04-13 02:06:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:06:44.170106 | orchestrator | 2026-04-13 02:06:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:47.217056 | orchestrator | 2026-04-13 02:06:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:47.217881 | orchestrator | 2026-04-13 02:06:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:47.217896 | orchestrator | 2026-04-13 02:06:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:50.261444 | orchestrator | 2026-04-13 02:06:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:50.262795 | orchestrator | 2026-04-13 02:06:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:50.262836 | orchestrator | 2026-04-13 02:06:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:53.312236 | orchestrator | 2026-04-13 02:06:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:53.312871 | orchestrator | 2026-04-13 02:06:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:53.312922 | orchestrator | 2026-04-13 02:06:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:56.367258 | orchestrator | 2026-04-13 02:06:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:56.369954 | orchestrator | 2026-04-13 02:06:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:56.370053 | orchestrator | 2026-04-13 02:06:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:06:59.421772 | orchestrator | 2026-04-13 02:06:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:06:59.424619 | orchestrator | 2026-04-13 02:06:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:06:59.424672 | orchestrator | 2026-04-13 02:06:59 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:07:02.472183 | orchestrator | 2026-04-13 02:07:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:02.473452 | orchestrator | 2026-04-13 02:07:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:02.473492 | orchestrator | 2026-04-13 02:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:05.528971 | orchestrator | 2026-04-13 02:07:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:05.533042 | orchestrator | 2026-04-13 02:07:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:05.533214 | orchestrator | 2026-04-13 02:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:08.584313 | orchestrator | 2026-04-13 02:07:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:08.587153 | orchestrator | 2026-04-13 02:07:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:08.587191 | orchestrator | 2026-04-13 02:07:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:11.642339 | orchestrator | 2026-04-13 02:07:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:11.643947 | orchestrator | 2026-04-13 02:07:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:11.644079 | orchestrator | 2026-04-13 02:07:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:14.693902 | orchestrator | 2026-04-13 02:07:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:14.695170 | orchestrator | 2026-04-13 02:07:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:14.695209 | orchestrator | 2026-04-13 02:07:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:17.744609 | orchestrator | 2026-04-13 
02:07:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:17.746286 | orchestrator | 2026-04-13 02:07:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:17.746322 | orchestrator | 2026-04-13 02:07:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:20.792275 | orchestrator | 2026-04-13 02:07:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:20.794085 | orchestrator | 2026-04-13 02:07:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:20.794156 | orchestrator | 2026-04-13 02:07:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:23.839583 | orchestrator | 2026-04-13 02:07:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:23.840920 | orchestrator | 2026-04-13 02:07:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:23.840966 | orchestrator | 2026-04-13 02:07:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:26.894196 | orchestrator | 2026-04-13 02:07:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:26.896019 | orchestrator | 2026-04-13 02:07:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:26.896190 | orchestrator | 2026-04-13 02:07:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:29.944419 | orchestrator | 2026-04-13 02:07:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:29.945980 | orchestrator | 2026-04-13 02:07:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:29.946199 | orchestrator | 2026-04-13 02:07:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:32.996222 | orchestrator | 2026-04-13 02:07:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:07:32.999047 | orchestrator | 2026-04-13 02:07:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:32.999122 | orchestrator | 2026-04-13 02:07:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:36.050692 | orchestrator | 2026-04-13 02:07:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:36.051483 | orchestrator | 2026-04-13 02:07:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:36.051509 | orchestrator | 2026-04-13 02:07:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:39.098201 | orchestrator | 2026-04-13 02:07:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:39.100188 | orchestrator | 2026-04-13 02:07:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:39.100231 | orchestrator | 2026-04-13 02:07:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:42.145576 | orchestrator | 2026-04-13 02:07:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:42.147649 | orchestrator | 2026-04-13 02:07:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:42.147698 | orchestrator | 2026-04-13 02:07:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:45.197324 | orchestrator | 2026-04-13 02:07:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:45.199548 | orchestrator | 2026-04-13 02:07:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:45.199678 | orchestrator | 2026-04-13 02:07:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:48.234652 | orchestrator | 2026-04-13 02:07:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:48.234985 | orchestrator | 2026-04-13 02:07:48 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:48.235022 | orchestrator | 2026-04-13 02:07:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:51.286333 | orchestrator | 2026-04-13 02:07:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:51.288483 | orchestrator | 2026-04-13 02:07:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:51.288559 | orchestrator | 2026-04-13 02:07:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:54.333070 | orchestrator | 2026-04-13 02:07:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:54.334357 | orchestrator | 2026-04-13 02:07:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:54.334454 | orchestrator | 2026-04-13 02:07:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:07:57.382461 | orchestrator | 2026-04-13 02:07:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:07:57.384337 | orchestrator | 2026-04-13 02:07:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:07:57.384377 | orchestrator | 2026-04-13 02:07:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:00.431260 | orchestrator | 2026-04-13 02:08:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:00.432606 | orchestrator | 2026-04-13 02:08:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:00.432652 | orchestrator | 2026-04-13 02:08:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:03.484233 | orchestrator | 2026-04-13 02:08:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:03.486512 | orchestrator | 2026-04-13 02:08:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:08:03.486572 | orchestrator | 2026-04-13 02:08:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:06.543949 | orchestrator | 2026-04-13 02:08:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:06.545235 | orchestrator | 2026-04-13 02:08:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:06.545275 | orchestrator | 2026-04-13 02:08:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:09.582999 | orchestrator | 2026-04-13 02:08:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:09.584280 | orchestrator | 2026-04-13 02:08:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:09.584354 | orchestrator | 2026-04-13 02:08:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:12.636390 | orchestrator | 2026-04-13 02:08:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:12.637797 | orchestrator | 2026-04-13 02:08:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:12.638085 | orchestrator | 2026-04-13 02:08:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:15.689935 | orchestrator | 2026-04-13 02:08:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:15.691372 | orchestrator | 2026-04-13 02:08:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:15.691410 | orchestrator | 2026-04-13 02:08:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:18.728927 | orchestrator | 2026-04-13 02:08:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:18.730775 | orchestrator | 2026-04-13 02:08:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:18.730813 | orchestrator | 2026-04-13 02:08:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:08:21.780526 | orchestrator | 2026-04-13 02:08:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:21.782842 | orchestrator | 2026-04-13 02:08:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:21.782910 | orchestrator | 2026-04-13 02:08:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:24.825573 | orchestrator | 2026-04-13 02:08:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:24.826817 | orchestrator | 2026-04-13 02:08:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:24.826866 | orchestrator | 2026-04-13 02:08:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:27.877594 | orchestrator | 2026-04-13 02:08:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:27.879577 | orchestrator | 2026-04-13 02:08:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:27.879615 | orchestrator | 2026-04-13 02:08:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:30.932105 | orchestrator | 2026-04-13 02:08:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:30.934072 | orchestrator | 2026-04-13 02:08:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:30.934100 | orchestrator | 2026-04-13 02:08:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:33.980947 | orchestrator | 2026-04-13 02:08:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:33.982906 | orchestrator | 2026-04-13 02:08:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:33.982961 | orchestrator | 2026-04-13 02:08:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:37.034956 | orchestrator | 2026-04-13 
02:08:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:37.037137 | orchestrator | 2026-04-13 02:08:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:37.037170 | orchestrator | 2026-04-13 02:08:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:40.092087 | orchestrator | 2026-04-13 02:08:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:40.094314 | orchestrator | 2026-04-13 02:08:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:40.094355 | orchestrator | 2026-04-13 02:08:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:43.152140 | orchestrator | 2026-04-13 02:08:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:43.154877 | orchestrator | 2026-04-13 02:08:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:43.154921 | orchestrator | 2026-04-13 02:08:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:46.200951 | orchestrator | 2026-04-13 02:08:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:46.202359 | orchestrator | 2026-04-13 02:08:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:46.202400 | orchestrator | 2026-04-13 02:08:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:49.243243 | orchestrator | 2026-04-13 02:08:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:49.245159 | orchestrator | 2026-04-13 02:08:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:49.245207 | orchestrator | 2026-04-13 02:08:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:52.288830 | orchestrator | 2026-04-13 02:08:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:08:52.289892 | orchestrator | 2026-04-13 02:08:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:52.289950 | orchestrator | 2026-04-13 02:08:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:55.323512 | orchestrator | 2026-04-13 02:08:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:55.325960 | orchestrator | 2026-04-13 02:08:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:55.326092 | orchestrator | 2026-04-13 02:08:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:08:58.366532 | orchestrator | 2026-04-13 02:08:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:08:58.368321 | orchestrator | 2026-04-13 02:08:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:08:58.368376 | orchestrator | 2026-04-13 02:08:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:01.421378 | orchestrator | 2026-04-13 02:09:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:01.424147 | orchestrator | 2026-04-13 02:09:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:01.424453 | orchestrator | 2026-04-13 02:09:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:04.473305 | orchestrator | 2026-04-13 02:09:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:04.475284 | orchestrator | 2026-04-13 02:09:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:04.475365 | orchestrator | 2026-04-13 02:09:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:07.527371 | orchestrator | 2026-04-13 02:09:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:07.532656 | orchestrator | 2026-04-13 02:09:07 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:07.532756 | orchestrator | 2026-04-13 02:09:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:10.582094 | orchestrator | 2026-04-13 02:09:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:10.585584 | orchestrator | 2026-04-13 02:09:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:10.585692 | orchestrator | 2026-04-13 02:09:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:13.638241 | orchestrator | 2026-04-13 02:09:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:13.638316 | orchestrator | 2026-04-13 02:09:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:13.638324 | orchestrator | 2026-04-13 02:09:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:16.685333 | orchestrator | 2026-04-13 02:09:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:16.686913 | orchestrator | 2026-04-13 02:09:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:16.686953 | orchestrator | 2026-04-13 02:09:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:19.732435 | orchestrator | 2026-04-13 02:09:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:19.733000 | orchestrator | 2026-04-13 02:09:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:19.733082 | orchestrator | 2026-04-13 02:09:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:22.790534 | orchestrator | 2026-04-13 02:09:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:22.792502 | orchestrator | 2026-04-13 02:09:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:09:22.792541 | orchestrator | 2026-04-13 02:09:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:25.851271 | orchestrator | 2026-04-13 02:09:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:25.854186 | orchestrator | 2026-04-13 02:09:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:25.854234 | orchestrator | 2026-04-13 02:09:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:28.906556 | orchestrator | 2026-04-13 02:09:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:28.907762 | orchestrator | 2026-04-13 02:09:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:28.907802 | orchestrator | 2026-04-13 02:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:31.962157 | orchestrator | 2026-04-13 02:09:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:31.964452 | orchestrator | 2026-04-13 02:09:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:31.964479 | orchestrator | 2026-04-13 02:09:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:35.017568 | orchestrator | 2026-04-13 02:09:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:35.019094 | orchestrator | 2026-04-13 02:09:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:35.019155 | orchestrator | 2026-04-13 02:09:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:09:38.070279 | orchestrator | 2026-04-13 02:09:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:09:38.071188 | orchestrator | 2026-04-13 02:09:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:09:38.071215 | orchestrator | 2026-04-13 02:09:38 | INFO  | Wait 1 second(s) 
until the next check
2026-04-13 02:09:41.123591 | orchestrator | 2026-04-13 02:09:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:09:41.125201 | orchestrator | 2026-04-13 02:09:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:09:41.125526 | orchestrator | 2026-04-13 02:09:41 | INFO  | Wait 1 second(s) until the next check
2026-04-13 02:14:55.424675 | orchestrator | 2026-04-13 02:14:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:14:55.425221 | orchestrator | 2026-04-13 02:14:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:14:55.427663 | orchestrator | 2026-04-13 02:14:55 | INFO  | Wait 1 second(s)
until the next check 2026-04-13 02:14:58.470799 | orchestrator | 2026-04-13 02:14:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:14:58.473585 | orchestrator | 2026-04-13 02:14:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:14:58.473655 | orchestrator | 2026-04-13 02:14:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:01.525838 | orchestrator | 2026-04-13 02:15:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:01.527371 | orchestrator | 2026-04-13 02:15:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:01.527493 | orchestrator | 2026-04-13 02:15:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:04.578111 | orchestrator | 2026-04-13 02:15:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:04.580429 | orchestrator | 2026-04-13 02:15:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:04.580519 | orchestrator | 2026-04-13 02:15:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:07.624999 | orchestrator | 2026-04-13 02:15:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:07.627499 | orchestrator | 2026-04-13 02:15:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:07.627770 | orchestrator | 2026-04-13 02:15:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:10.673007 | orchestrator | 2026-04-13 02:15:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:10.674912 | orchestrator | 2026-04-13 02:15:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:10.675005 | orchestrator | 2026-04-13 02:15:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:13.723049 | orchestrator | 2026-04-13 
02:15:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:13.724240 | orchestrator | 2026-04-13 02:15:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:13.724347 | orchestrator | 2026-04-13 02:15:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:16.776172 | orchestrator | 2026-04-13 02:15:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:16.777942 | orchestrator | 2026-04-13 02:15:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:16.778290 | orchestrator | 2026-04-13 02:15:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:19.824674 | orchestrator | 2026-04-13 02:15:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:19.825470 | orchestrator | 2026-04-13 02:15:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:19.825561 | orchestrator | 2026-04-13 02:15:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:22.871640 | orchestrator | 2026-04-13 02:15:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:22.873305 | orchestrator | 2026-04-13 02:15:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:22.873341 | orchestrator | 2026-04-13 02:15:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:25.925005 | orchestrator | 2026-04-13 02:15:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:25.926905 | orchestrator | 2026-04-13 02:15:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:25.926989 | orchestrator | 2026-04-13 02:15:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:28.989641 | orchestrator | 2026-04-13 02:15:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:15:28.992486 | orchestrator | 2026-04-13 02:15:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:28.992615 | orchestrator | 2026-04-13 02:15:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:32.049681 | orchestrator | 2026-04-13 02:15:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:32.051833 | orchestrator | 2026-04-13 02:15:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:32.051903 | orchestrator | 2026-04-13 02:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:35.102346 | orchestrator | 2026-04-13 02:15:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:35.104917 | orchestrator | 2026-04-13 02:15:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:35.104994 | orchestrator | 2026-04-13 02:15:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:38.150680 | orchestrator | 2026-04-13 02:15:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:38.153018 | orchestrator | 2026-04-13 02:15:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:38.153223 | orchestrator | 2026-04-13 02:15:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:41.195640 | orchestrator | 2026-04-13 02:15:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:41.197496 | orchestrator | 2026-04-13 02:15:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:41.197673 | orchestrator | 2026-04-13 02:15:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:44.249772 | orchestrator | 2026-04-13 02:15:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:44.252112 | orchestrator | 2026-04-13 02:15:44 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:44.252150 | orchestrator | 2026-04-13 02:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:47.302282 | orchestrator | 2026-04-13 02:15:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:47.302826 | orchestrator | 2026-04-13 02:15:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:47.302863 | orchestrator | 2026-04-13 02:15:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:50.353014 | orchestrator | 2026-04-13 02:15:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:50.354242 | orchestrator | 2026-04-13 02:15:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:50.354345 | orchestrator | 2026-04-13 02:15:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:53.402443 | orchestrator | 2026-04-13 02:15:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:53.403029 | orchestrator | 2026-04-13 02:15:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:53.403068 | orchestrator | 2026-04-13 02:15:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:56.457269 | orchestrator | 2026-04-13 02:15:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:56.458812 | orchestrator | 2026-04-13 02:15:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:15:56.458831 | orchestrator | 2026-04-13 02:15:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:15:59.508823 | orchestrator | 2026-04-13 02:15:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:15:59.511195 | orchestrator | 2026-04-13 02:15:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:15:59.511363 | orchestrator | 2026-04-13 02:15:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:02.560250 | orchestrator | 2026-04-13 02:16:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:02.562075 | orchestrator | 2026-04-13 02:16:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:02.562156 | orchestrator | 2026-04-13 02:16:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:05.611552 | orchestrator | 2026-04-13 02:16:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:05.613614 | orchestrator | 2026-04-13 02:16:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:05.613638 | orchestrator | 2026-04-13 02:16:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:08.664793 | orchestrator | 2026-04-13 02:16:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:08.665942 | orchestrator | 2026-04-13 02:16:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:08.666003 | orchestrator | 2026-04-13 02:16:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:11.715842 | orchestrator | 2026-04-13 02:16:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:11.717979 | orchestrator | 2026-04-13 02:16:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:11.718088 | orchestrator | 2026-04-13 02:16:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:14.769862 | orchestrator | 2026-04-13 02:16:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:14.770899 | orchestrator | 2026-04-13 02:16:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:14.770941 | orchestrator | 2026-04-13 02:16:14 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:16:17.819836 | orchestrator | 2026-04-13 02:16:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:17.821473 | orchestrator | 2026-04-13 02:16:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:17.821634 | orchestrator | 2026-04-13 02:16:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:20.870013 | orchestrator | 2026-04-13 02:16:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:20.870882 | orchestrator | 2026-04-13 02:16:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:20.870933 | orchestrator | 2026-04-13 02:16:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:23.921631 | orchestrator | 2026-04-13 02:16:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:23.923258 | orchestrator | 2026-04-13 02:16:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:23.923293 | orchestrator | 2026-04-13 02:16:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:26.977821 | orchestrator | 2026-04-13 02:16:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:26.979597 | orchestrator | 2026-04-13 02:16:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:26.979625 | orchestrator | 2026-04-13 02:16:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:30.023971 | orchestrator | 2026-04-13 02:16:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:30.024710 | orchestrator | 2026-04-13 02:16:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:30.024741 | orchestrator | 2026-04-13 02:16:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:33.069546 | orchestrator | 2026-04-13 
02:16:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:33.071588 | orchestrator | 2026-04-13 02:16:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:33.071656 | orchestrator | 2026-04-13 02:16:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:36.127607 | orchestrator | 2026-04-13 02:16:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:36.130245 | orchestrator | 2026-04-13 02:16:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:36.130297 | orchestrator | 2026-04-13 02:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:39.181744 | orchestrator | 2026-04-13 02:16:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:39.183949 | orchestrator | 2026-04-13 02:16:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:39.184037 | orchestrator | 2026-04-13 02:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:42.227977 | orchestrator | 2026-04-13 02:16:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:42.228704 | orchestrator | 2026-04-13 02:16:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:42.228744 | orchestrator | 2026-04-13 02:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:45.275806 | orchestrator | 2026-04-13 02:16:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:45.277976 | orchestrator | 2026-04-13 02:16:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:45.278013 | orchestrator | 2026-04-13 02:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:48.324933 | orchestrator | 2026-04-13 02:16:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:16:48.326604 | orchestrator | 2026-04-13 02:16:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:48.326726 | orchestrator | 2026-04-13 02:16:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:51.373982 | orchestrator | 2026-04-13 02:16:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:51.375919 | orchestrator | 2026-04-13 02:16:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:51.375952 | orchestrator | 2026-04-13 02:16:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:54.428195 | orchestrator | 2026-04-13 02:16:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:54.430458 | orchestrator | 2026-04-13 02:16:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:54.430561 | orchestrator | 2026-04-13 02:16:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:16:57.473867 | orchestrator | 2026-04-13 02:16:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:16:57.474991 | orchestrator | 2026-04-13 02:16:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:16:57.475231 | orchestrator | 2026-04-13 02:16:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:00.522268 | orchestrator | 2026-04-13 02:17:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:00.524250 | orchestrator | 2026-04-13 02:17:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:00.524330 | orchestrator | 2026-04-13 02:17:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:03.570915 | orchestrator | 2026-04-13 02:17:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:03.572851 | orchestrator | 2026-04-13 02:17:03 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:03.572908 | orchestrator | 2026-04-13 02:17:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:06.618461 | orchestrator | 2026-04-13 02:17:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:06.621715 | orchestrator | 2026-04-13 02:17:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:06.621822 | orchestrator | 2026-04-13 02:17:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:09.665926 | orchestrator | 2026-04-13 02:17:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:09.666147 | orchestrator | 2026-04-13 02:17:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:09.666177 | orchestrator | 2026-04-13 02:17:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:12.714237 | orchestrator | 2026-04-13 02:17:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:12.715730 | orchestrator | 2026-04-13 02:17:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:12.715783 | orchestrator | 2026-04-13 02:17:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:15.762535 | orchestrator | 2026-04-13 02:17:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:15.763610 | orchestrator | 2026-04-13 02:17:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:15.763641 | orchestrator | 2026-04-13 02:17:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:18.812305 | orchestrator | 2026-04-13 02:17:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:18.814338 | orchestrator | 2026-04-13 02:17:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:17:18.814424 | orchestrator | 2026-04-13 02:17:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:21.855043 | orchestrator | 2026-04-13 02:17:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:21.856341 | orchestrator | 2026-04-13 02:17:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:21.856397 | orchestrator | 2026-04-13 02:17:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:24.904540 | orchestrator | 2026-04-13 02:17:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:24.905698 | orchestrator | 2026-04-13 02:17:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:24.905796 | orchestrator | 2026-04-13 02:17:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:27.959801 | orchestrator | 2026-04-13 02:17:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:27.961773 | orchestrator | 2026-04-13 02:17:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:27.961910 | orchestrator | 2026-04-13 02:17:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:31.017969 | orchestrator | 2026-04-13 02:17:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:31.019042 | orchestrator | 2026-04-13 02:17:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:31.020015 | orchestrator | 2026-04-13 02:17:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:34.066235 | orchestrator | 2026-04-13 02:17:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:34.067462 | orchestrator | 2026-04-13 02:17:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:34.067538 | orchestrator | 2026-04-13 02:17:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:17:37.119292 | orchestrator | 2026-04-13 02:17:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:37.121936 | orchestrator | 2026-04-13 02:17:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:37.121981 | orchestrator | 2026-04-13 02:17:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:40.169670 | orchestrator | 2026-04-13 02:17:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:40.171718 | orchestrator | 2026-04-13 02:17:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:40.171814 | orchestrator | 2026-04-13 02:17:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:43.219119 | orchestrator | 2026-04-13 02:17:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:43.221135 | orchestrator | 2026-04-13 02:17:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:43.221209 | orchestrator | 2026-04-13 02:17:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:46.274373 | orchestrator | 2026-04-13 02:17:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:46.277410 | orchestrator | 2026-04-13 02:17:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:46.277528 | orchestrator | 2026-04-13 02:17:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:49.324398 | orchestrator | 2026-04-13 02:17:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:49.326194 | orchestrator | 2026-04-13 02:17:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:49.326250 | orchestrator | 2026-04-13 02:17:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:52.389185 | orchestrator | 2026-04-13 
02:17:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:52.391970 | orchestrator | 2026-04-13 02:17:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:52.392361 | orchestrator | 2026-04-13 02:17:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:55.443829 | orchestrator | 2026-04-13 02:17:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:55.444902 | orchestrator | 2026-04-13 02:17:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:55.444948 | orchestrator | 2026-04-13 02:17:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:17:58.489899 | orchestrator | 2026-04-13 02:17:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:17:58.491348 | orchestrator | 2026-04-13 02:17:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:17:58.491389 | orchestrator | 2026-04-13 02:17:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:01.538184 | orchestrator | 2026-04-13 02:18:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:01.538957 | orchestrator | 2026-04-13 02:18:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:01.538992 | orchestrator | 2026-04-13 02:18:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:04.587154 | orchestrator | 2026-04-13 02:18:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:04.588339 | orchestrator | 2026-04-13 02:18:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:04.588601 | orchestrator | 2026-04-13 02:18:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:07.641609 | orchestrator | 2026-04-13 02:18:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:18:07.643757 | orchestrator | 2026-04-13 02:18:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:07.643793 | orchestrator | 2026-04-13 02:18:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:10.695915 | orchestrator | 2026-04-13 02:18:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:10.697641 | orchestrator | 2026-04-13 02:18:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:10.697740 | orchestrator | 2026-04-13 02:18:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:13.751935 | orchestrator | 2026-04-13 02:18:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:13.753443 | orchestrator | 2026-04-13 02:18:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:13.753513 | orchestrator | 2026-04-13 02:18:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:16.808241 | orchestrator | 2026-04-13 02:18:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:16.809841 | orchestrator | 2026-04-13 02:18:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:16.810099 | orchestrator | 2026-04-13 02:18:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:19.860124 | orchestrator | 2026-04-13 02:18:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:19.862838 | orchestrator | 2026-04-13 02:18:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:19.862974 | orchestrator | 2026-04-13 02:18:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:22.911553 | orchestrator | 2026-04-13 02:18:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:22.912930 | orchestrator | 2026-04-13 02:18:22 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:22.912992 | orchestrator | 2026-04-13 02:18:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:25.965669 | orchestrator | 2026-04-13 02:18:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:25.966223 | orchestrator | 2026-04-13 02:18:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:25.966761 | orchestrator | 2026-04-13 02:18:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:29.012056 | orchestrator | 2026-04-13 02:18:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:29.013112 | orchestrator | 2026-04-13 02:18:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:29.013150 | orchestrator | 2026-04-13 02:18:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:32.060789 | orchestrator | 2026-04-13 02:18:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:32.062166 | orchestrator | 2026-04-13 02:18:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:32.062222 | orchestrator | 2026-04-13 02:18:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:35.105607 | orchestrator | 2026-04-13 02:18:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:35.107401 | orchestrator | 2026-04-13 02:18:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:18:35.107529 | orchestrator | 2026-04-13 02:18:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:18:38.157093 | orchestrator | 2026-04-13 02:18:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:18:38.157240 | orchestrator | 2026-04-13 02:18:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:18:38.157280 | orchestrator | 2026-04-13 02:18:38 | INFO  | Wait 1 second(s) until the next check
2026-04-13 02:18:41.199814 | orchestrator | 2026-04-13 02:18:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:18:41.201480 | orchestrator | 2026-04-13 02:18:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:18:41.201536 | orchestrator | 2026-04-13 02:18:41 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds: both tasks remain in state STARTED from 02:18:44 through 02:21:56 and again from 02:23:59 through 02:26:07 ...]
2026-04-13 02:26:10.595230 | orchestrator | 2026-04-13 02:26:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:26:10.712208 | orchestrator | 2026-04-13 02:26:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:26:10.712323 | orchestrator | 2026-04-13 02:26:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:26:13.638389 | orchestrator | 2026-04-13 02:26:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:13.639471 | orchestrator | 2026-04-13 02:26:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:13.639513 | orchestrator | 2026-04-13 02:26:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:16.687463 | orchestrator | 2026-04-13 02:26:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:16.689925 | orchestrator | 2026-04-13 02:26:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:16.689971 | orchestrator | 2026-04-13 02:26:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:19.738540 | orchestrator | 2026-04-13 02:26:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:19.740577 | orchestrator | 2026-04-13 02:26:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:19.740638 | orchestrator | 2026-04-13 02:26:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:22.787169 | orchestrator | 2026-04-13 02:26:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:22.789370 | orchestrator | 2026-04-13 02:26:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:22.789448 | orchestrator | 2026-04-13 02:26:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:25.831793 | orchestrator | 2026-04-13 02:26:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:25.835414 | orchestrator | 2026-04-13 02:26:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:25.835497 | orchestrator | 2026-04-13 02:26:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:28.869070 | orchestrator | 2026-04-13 
02:26:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:28.870108 | orchestrator | 2026-04-13 02:26:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:28.870263 | orchestrator | 2026-04-13 02:26:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:31.909778 | orchestrator | 2026-04-13 02:26:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:31.911561 | orchestrator | 2026-04-13 02:26:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:31.911621 | orchestrator | 2026-04-13 02:26:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:34.963242 | orchestrator | 2026-04-13 02:26:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:34.964327 | orchestrator | 2026-04-13 02:26:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:34.965255 | orchestrator | 2026-04-13 02:26:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:38.008120 | orchestrator | 2026-04-13 02:26:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:38.009424 | orchestrator | 2026-04-13 02:26:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:38.009451 | orchestrator | 2026-04-13 02:26:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:41.062499 | orchestrator | 2026-04-13 02:26:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:41.063944 | orchestrator | 2026-04-13 02:26:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:41.064018 | orchestrator | 2026-04-13 02:26:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:44.108485 | orchestrator | 2026-04-13 02:26:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:26:44.110603 | orchestrator | 2026-04-13 02:26:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:44.110651 | orchestrator | 2026-04-13 02:26:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:47.165152 | orchestrator | 2026-04-13 02:26:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:47.168503 | orchestrator | 2026-04-13 02:26:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:47.168579 | orchestrator | 2026-04-13 02:26:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:50.212558 | orchestrator | 2026-04-13 02:26:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:50.213631 | orchestrator | 2026-04-13 02:26:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:50.213694 | orchestrator | 2026-04-13 02:26:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:53.257745 | orchestrator | 2026-04-13 02:26:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:53.259162 | orchestrator | 2026-04-13 02:26:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:53.259204 | orchestrator | 2026-04-13 02:26:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:56.309408 | orchestrator | 2026-04-13 02:26:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:56.311860 | orchestrator | 2026-04-13 02:26:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:56.312025 | orchestrator | 2026-04-13 02:26:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:26:59.351933 | orchestrator | 2026-04-13 02:26:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:26:59.353529 | orchestrator | 2026-04-13 02:26:59 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:26:59.353580 | orchestrator | 2026-04-13 02:26:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:02.400230 | orchestrator | 2026-04-13 02:27:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:02.402125 | orchestrator | 2026-04-13 02:27:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:02.402161 | orchestrator | 2026-04-13 02:27:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:05.448658 | orchestrator | 2026-04-13 02:27:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:05.448994 | orchestrator | 2026-04-13 02:27:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:05.449023 | orchestrator | 2026-04-13 02:27:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:08.497112 | orchestrator | 2026-04-13 02:27:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:08.500162 | orchestrator | 2026-04-13 02:27:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:08.500201 | orchestrator | 2026-04-13 02:27:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:11.543388 | orchestrator | 2026-04-13 02:27:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:11.544872 | orchestrator | 2026-04-13 02:27:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:11.544915 | orchestrator | 2026-04-13 02:27:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:14.587978 | orchestrator | 2026-04-13 02:27:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:14.589485 | orchestrator | 2026-04-13 02:27:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:27:14.589697 | orchestrator | 2026-04-13 02:27:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:17.633602 | orchestrator | 2026-04-13 02:27:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:17.635607 | orchestrator | 2026-04-13 02:27:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:17.635713 | orchestrator | 2026-04-13 02:27:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:20.678465 | orchestrator | 2026-04-13 02:27:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:20.681543 | orchestrator | 2026-04-13 02:27:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:20.681588 | orchestrator | 2026-04-13 02:27:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:23.718947 | orchestrator | 2026-04-13 02:27:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:23.719234 | orchestrator | 2026-04-13 02:27:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:23.719260 | orchestrator | 2026-04-13 02:27:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:26.758411 | orchestrator | 2026-04-13 02:27:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:26.760481 | orchestrator | 2026-04-13 02:27:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:26.760580 | orchestrator | 2026-04-13 02:27:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:29.795775 | orchestrator | 2026-04-13 02:27:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:29.796754 | orchestrator | 2026-04-13 02:27:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:29.796804 | orchestrator | 2026-04-13 02:27:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:27:32.832988 | orchestrator | 2026-04-13 02:27:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:32.834347 | orchestrator | 2026-04-13 02:27:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:32.834386 | orchestrator | 2026-04-13 02:27:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:35.871395 | orchestrator | 2026-04-13 02:27:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:35.872805 | orchestrator | 2026-04-13 02:27:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:35.872841 | orchestrator | 2026-04-13 02:27:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:38.922176 | orchestrator | 2026-04-13 02:27:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:38.923758 | orchestrator | 2026-04-13 02:27:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:38.923797 | orchestrator | 2026-04-13 02:27:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:41.971794 | orchestrator | 2026-04-13 02:27:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:41.972885 | orchestrator | 2026-04-13 02:27:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:41.972978 | orchestrator | 2026-04-13 02:27:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:45.022111 | orchestrator | 2026-04-13 02:27:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:45.023532 | orchestrator | 2026-04-13 02:27:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:45.023583 | orchestrator | 2026-04-13 02:27:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:48.069913 | orchestrator | 2026-04-13 
02:27:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:48.071804 | orchestrator | 2026-04-13 02:27:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:48.071940 | orchestrator | 2026-04-13 02:27:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:51.113918 | orchestrator | 2026-04-13 02:27:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:51.114536 | orchestrator | 2026-04-13 02:27:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:51.114562 | orchestrator | 2026-04-13 02:27:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:54.157250 | orchestrator | 2026-04-13 02:27:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:54.158201 | orchestrator | 2026-04-13 02:27:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:54.158240 | orchestrator | 2026-04-13 02:27:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:27:57.216959 | orchestrator | 2026-04-13 02:27:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:27:57.218752 | orchestrator | 2026-04-13 02:27:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:27:57.218828 | orchestrator | 2026-04-13 02:27:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:00.262492 | orchestrator | 2026-04-13 02:28:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:00.264686 | orchestrator | 2026-04-13 02:28:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:00.264794 | orchestrator | 2026-04-13 02:28:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:03.309170 | orchestrator | 2026-04-13 02:28:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:28:03.311415 | orchestrator | 2026-04-13 02:28:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:03.311493 | orchestrator | 2026-04-13 02:28:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:06.356352 | orchestrator | 2026-04-13 02:28:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:06.357471 | orchestrator | 2026-04-13 02:28:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:06.357707 | orchestrator | 2026-04-13 02:28:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:09.411028 | orchestrator | 2026-04-13 02:28:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:09.412393 | orchestrator | 2026-04-13 02:28:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:09.412568 | orchestrator | 2026-04-13 02:28:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:12.459806 | orchestrator | 2026-04-13 02:28:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:12.461527 | orchestrator | 2026-04-13 02:28:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:12.461578 | orchestrator | 2026-04-13 02:28:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:15.510889 | orchestrator | 2026-04-13 02:28:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:15.512123 | orchestrator | 2026-04-13 02:28:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:15.512146 | orchestrator | 2026-04-13 02:28:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:18.557848 | orchestrator | 2026-04-13 02:28:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:18.559833 | orchestrator | 2026-04-13 02:28:18 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:18.559885 | orchestrator | 2026-04-13 02:28:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:21.606778 | orchestrator | 2026-04-13 02:28:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:21.608044 | orchestrator | 2026-04-13 02:28:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:21.608249 | orchestrator | 2026-04-13 02:28:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:24.654605 | orchestrator | 2026-04-13 02:28:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:24.655479 | orchestrator | 2026-04-13 02:28:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:24.655658 | orchestrator | 2026-04-13 02:28:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:27.703659 | orchestrator | 2026-04-13 02:28:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:27.705721 | orchestrator | 2026-04-13 02:28:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:27.705761 | orchestrator | 2026-04-13 02:28:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:30.751480 | orchestrator | 2026-04-13 02:28:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:30.753380 | orchestrator | 2026-04-13 02:28:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:30.753483 | orchestrator | 2026-04-13 02:28:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:33.799023 | orchestrator | 2026-04-13 02:28:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:33.800943 | orchestrator | 2026-04-13 02:28:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:28:33.800992 | orchestrator | 2026-04-13 02:28:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:36.849402 | orchestrator | 2026-04-13 02:28:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:36.852105 | orchestrator | 2026-04-13 02:28:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:36.852175 | orchestrator | 2026-04-13 02:28:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:39.899867 | orchestrator | 2026-04-13 02:28:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:39.901474 | orchestrator | 2026-04-13 02:28:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:39.901578 | orchestrator | 2026-04-13 02:28:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:42.950553 | orchestrator | 2026-04-13 02:28:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:42.952512 | orchestrator | 2026-04-13 02:28:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:42.952547 | orchestrator | 2026-04-13 02:28:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:46.001719 | orchestrator | 2026-04-13 02:28:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:46.003430 | orchestrator | 2026-04-13 02:28:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:46.003511 | orchestrator | 2026-04-13 02:28:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:49.050498 | orchestrator | 2026-04-13 02:28:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:49.051849 | orchestrator | 2026-04-13 02:28:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:49.051976 | orchestrator | 2026-04-13 02:28:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:28:52.092651 | orchestrator | 2026-04-13 02:28:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:52.094077 | orchestrator | 2026-04-13 02:28:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:52.094165 | orchestrator | 2026-04-13 02:28:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:55.146572 | orchestrator | 2026-04-13 02:28:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:55.148113 | orchestrator | 2026-04-13 02:28:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:55.148137 | orchestrator | 2026-04-13 02:28:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:28:58.189710 | orchestrator | 2026-04-13 02:28:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:28:58.190957 | orchestrator | 2026-04-13 02:28:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:28:58.190999 | orchestrator | 2026-04-13 02:28:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:01.236046 | orchestrator | 2026-04-13 02:29:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:01.237161 | orchestrator | 2026-04-13 02:29:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:01.237207 | orchestrator | 2026-04-13 02:29:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:04.284695 | orchestrator | 2026-04-13 02:29:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:04.286180 | orchestrator | 2026-04-13 02:29:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:04.286430 | orchestrator | 2026-04-13 02:29:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:07.329226 | orchestrator | 2026-04-13 
02:29:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:07.330173 | orchestrator | 2026-04-13 02:29:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:07.330249 | orchestrator | 2026-04-13 02:29:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:10.379124 | orchestrator | 2026-04-13 02:29:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:10.381053 | orchestrator | 2026-04-13 02:29:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:10.381091 | orchestrator | 2026-04-13 02:29:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:13.425752 | orchestrator | 2026-04-13 02:29:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:13.427994 | orchestrator | 2026-04-13 02:29:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:13.428047 | orchestrator | 2026-04-13 02:29:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:16.472061 | orchestrator | 2026-04-13 02:29:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:16.473959 | orchestrator | 2026-04-13 02:29:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:16.474091 | orchestrator | 2026-04-13 02:29:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:19.521566 | orchestrator | 2026-04-13 02:29:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:19.524403 | orchestrator | 2026-04-13 02:29:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:19.524465 | orchestrator | 2026-04-13 02:29:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:22.565232 | orchestrator | 2026-04-13 02:29:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:29:22.567506 | orchestrator | 2026-04-13 02:29:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:22.567592 | orchestrator | 2026-04-13 02:29:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:25.614987 | orchestrator | 2026-04-13 02:29:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:25.616383 | orchestrator | 2026-04-13 02:29:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:25.616450 | orchestrator | 2026-04-13 02:29:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:28.656601 | orchestrator | 2026-04-13 02:29:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:28.657656 | orchestrator | 2026-04-13 02:29:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:28.657691 | orchestrator | 2026-04-13 02:29:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:31.711877 | orchestrator | 2026-04-13 02:29:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:31.715391 | orchestrator | 2026-04-13 02:29:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:31.715460 | orchestrator | 2026-04-13 02:29:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:34.770776 | orchestrator | 2026-04-13 02:29:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:34.772693 | orchestrator | 2026-04-13 02:29:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:34.772786 | orchestrator | 2026-04-13 02:29:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:37.820794 | orchestrator | 2026-04-13 02:29:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:37.821811 | orchestrator | 2026-04-13 02:29:37 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:37.821887 | orchestrator | 2026-04-13 02:29:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:40.873158 | orchestrator | 2026-04-13 02:29:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:40.874990 | orchestrator | 2026-04-13 02:29:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:40.875043 | orchestrator | 2026-04-13 02:29:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:43.928101 | orchestrator | 2026-04-13 02:29:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:43.929615 | orchestrator | 2026-04-13 02:29:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:43.929697 | orchestrator | 2026-04-13 02:29:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:46.981978 | orchestrator | 2026-04-13 02:29:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:46.984093 | orchestrator | 2026-04-13 02:29:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:46.984449 | orchestrator | 2026-04-13 02:29:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:50.034573 | orchestrator | 2026-04-13 02:29:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:50.035649 | orchestrator | 2026-04-13 02:29:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:50.035717 | orchestrator | 2026-04-13 02:29:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:53.080222 | orchestrator | 2026-04-13 02:29:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:53.082544 | orchestrator | 2026-04-13 02:29:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:29:53.082585 | orchestrator | 2026-04-13 02:29:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:56.133596 | orchestrator | 2026-04-13 02:29:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:56.134936 | orchestrator | 2026-04-13 02:29:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:56.134984 | orchestrator | 2026-04-13 02:29:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:29:59.182189 | orchestrator | 2026-04-13 02:29:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:29:59.183328 | orchestrator | 2026-04-13 02:29:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:29:59.183374 | orchestrator | 2026-04-13 02:29:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:30:02.231789 | orchestrator | 2026-04-13 02:30:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:30:02.233215 | orchestrator | 2026-04-13 02:30:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:30:02.233248 | orchestrator | 2026-04-13 02:30:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:30:05.283585 | orchestrator | 2026-04-13 02:30:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:30:05.285572 | orchestrator | 2026-04-13 02:30:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:30:05.285682 | orchestrator | 2026-04-13 02:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:30:08.329068 | orchestrator | 2026-04-13 02:30:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:30:08.330559 | orchestrator | 2026-04-13 02:30:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:30:08.330614 | orchestrator | 2026-04-13 02:30:08 | INFO  | Wait 1 second(s) 
STARTED 2026-04-13 02:34:39.887809 | orchestrator | 2026-04-13 02:34:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:39.887866 | orchestrator | 2026-04-13 02:34:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:34:42.933332 | orchestrator | 2026-04-13 02:34:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:34:42.934389 | orchestrator | 2026-04-13 02:34:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:42.934437 | orchestrator | 2026-04-13 02:34:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:34:45.986858 | orchestrator | 2026-04-13 02:34:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:34:45.987624 | orchestrator | 2026-04-13 02:34:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:45.987658 | orchestrator | 2026-04-13 02:34:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:34:49.036771 | orchestrator | 2026-04-13 02:34:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:34:49.038171 | orchestrator | 2026-04-13 02:34:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:49.038197 | orchestrator | 2026-04-13 02:34:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:34:52.086486 | orchestrator | 2026-04-13 02:34:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:34:52.088556 | orchestrator | 2026-04-13 02:34:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:52.088662 | orchestrator | 2026-04-13 02:34:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:34:55.142753 | orchestrator | 2026-04-13 02:34:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:34:55.145578 | orchestrator | 2026-04-13 02:34:55 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:55.145731 | orchestrator | 2026-04-13 02:34:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:34:58.197147 | orchestrator | 2026-04-13 02:34:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:34:58.199581 | orchestrator | 2026-04-13 02:34:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:34:58.199653 | orchestrator | 2026-04-13 02:34:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:01.247665 | orchestrator | 2026-04-13 02:35:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:01.249345 | orchestrator | 2026-04-13 02:35:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:01.249382 | orchestrator | 2026-04-13 02:35:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:04.298493 | orchestrator | 2026-04-13 02:35:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:04.301677 | orchestrator | 2026-04-13 02:35:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:04.302202 | orchestrator | 2026-04-13 02:35:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:07.347940 | orchestrator | 2026-04-13 02:35:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:07.349434 | orchestrator | 2026-04-13 02:35:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:07.349686 | orchestrator | 2026-04-13 02:35:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:10.400557 | orchestrator | 2026-04-13 02:35:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:10.402194 | orchestrator | 2026-04-13 02:35:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:35:10.402251 | orchestrator | 2026-04-13 02:35:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:13.454674 | orchestrator | 2026-04-13 02:35:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:13.455797 | orchestrator | 2026-04-13 02:35:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:13.455861 | orchestrator | 2026-04-13 02:35:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:16.502414 | orchestrator | 2026-04-13 02:35:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:16.503948 | orchestrator | 2026-04-13 02:35:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:16.504391 | orchestrator | 2026-04-13 02:35:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:19.559350 | orchestrator | 2026-04-13 02:35:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:19.561760 | orchestrator | 2026-04-13 02:35:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:19.561918 | orchestrator | 2026-04-13 02:35:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:22.605935 | orchestrator | 2026-04-13 02:35:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:22.607441 | orchestrator | 2026-04-13 02:35:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:22.607479 | orchestrator | 2026-04-13 02:35:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:25.657046 | orchestrator | 2026-04-13 02:35:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:25.658948 | orchestrator | 2026-04-13 02:35:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:25.659012 | orchestrator | 2026-04-13 02:35:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:35:28.708498 | orchestrator | 2026-04-13 02:35:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:28.709416 | orchestrator | 2026-04-13 02:35:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:28.711820 | orchestrator | 2026-04-13 02:35:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:31.756819 | orchestrator | 2026-04-13 02:35:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:31.758430 | orchestrator | 2026-04-13 02:35:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:31.758472 | orchestrator | 2026-04-13 02:35:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:34.812034 | orchestrator | 2026-04-13 02:35:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:34.814182 | orchestrator | 2026-04-13 02:35:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:34.814303 | orchestrator | 2026-04-13 02:35:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:37.869688 | orchestrator | 2026-04-13 02:35:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:37.872835 | orchestrator | 2026-04-13 02:35:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:37.872868 | orchestrator | 2026-04-13 02:35:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:40.922327 | orchestrator | 2026-04-13 02:35:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:40.924263 | orchestrator | 2026-04-13 02:35:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:40.924324 | orchestrator | 2026-04-13 02:35:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:43.973845 | orchestrator | 2026-04-13 
02:35:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:43.975829 | orchestrator | 2026-04-13 02:35:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:43.975983 | orchestrator | 2026-04-13 02:35:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:47.022105 | orchestrator | 2026-04-13 02:35:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:47.023686 | orchestrator | 2026-04-13 02:35:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:47.023732 | orchestrator | 2026-04-13 02:35:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:50.068498 | orchestrator | 2026-04-13 02:35:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:50.070507 | orchestrator | 2026-04-13 02:35:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:50.070861 | orchestrator | 2026-04-13 02:35:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:53.127234 | orchestrator | 2026-04-13 02:35:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:53.129804 | orchestrator | 2026-04-13 02:35:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:53.129874 | orchestrator | 2026-04-13 02:35:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:56.175311 | orchestrator | 2026-04-13 02:35:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:35:56.176956 | orchestrator | 2026-04-13 02:35:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:56.177029 | orchestrator | 2026-04-13 02:35:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:35:59.219142 | orchestrator | 2026-04-13 02:35:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:35:59.219860 | orchestrator | 2026-04-13 02:35:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:35:59.219892 | orchestrator | 2026-04-13 02:35:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:02.259367 | orchestrator | 2026-04-13 02:36:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:02.261189 | orchestrator | 2026-04-13 02:36:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:02.261260 | orchestrator | 2026-04-13 02:36:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:05.315875 | orchestrator | 2026-04-13 02:36:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:05.317298 | orchestrator | 2026-04-13 02:36:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:05.317357 | orchestrator | 2026-04-13 02:36:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:08.365273 | orchestrator | 2026-04-13 02:36:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:08.367254 | orchestrator | 2026-04-13 02:36:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:08.367311 | orchestrator | 2026-04-13 02:36:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:11.416076 | orchestrator | 2026-04-13 02:36:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:11.418295 | orchestrator | 2026-04-13 02:36:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:11.418373 | orchestrator | 2026-04-13 02:36:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:14.466811 | orchestrator | 2026-04-13 02:36:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:14.468624 | orchestrator | 2026-04-13 02:36:14 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:14.468730 | orchestrator | 2026-04-13 02:36:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:17.520843 | orchestrator | 2026-04-13 02:36:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:17.522340 | orchestrator | 2026-04-13 02:36:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:17.522392 | orchestrator | 2026-04-13 02:36:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:20.571271 | orchestrator | 2026-04-13 02:36:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:20.573684 | orchestrator | 2026-04-13 02:36:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:20.573710 | orchestrator | 2026-04-13 02:36:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:23.624889 | orchestrator | 2026-04-13 02:36:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:23.627006 | orchestrator | 2026-04-13 02:36:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:23.627059 | orchestrator | 2026-04-13 02:36:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:26.676086 | orchestrator | 2026-04-13 02:36:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:26.678745 | orchestrator | 2026-04-13 02:36:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:26.678805 | orchestrator | 2026-04-13 02:36:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:29.727457 | orchestrator | 2026-04-13 02:36:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:29.728984 | orchestrator | 2026-04-13 02:36:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:36:29.729027 | orchestrator | 2026-04-13 02:36:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:32.778585 | orchestrator | 2026-04-13 02:36:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:32.781428 | orchestrator | 2026-04-13 02:36:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:32.781491 | orchestrator | 2026-04-13 02:36:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:35.833502 | orchestrator | 2026-04-13 02:36:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:35.835074 | orchestrator | 2026-04-13 02:36:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:35.835142 | orchestrator | 2026-04-13 02:36:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:38.880734 | orchestrator | 2026-04-13 02:36:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:38.881895 | orchestrator | 2026-04-13 02:36:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:38.881925 | orchestrator | 2026-04-13 02:36:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:41.927722 | orchestrator | 2026-04-13 02:36:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:41.929865 | orchestrator | 2026-04-13 02:36:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:41.930085 | orchestrator | 2026-04-13 02:36:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:44.978368 | orchestrator | 2026-04-13 02:36:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:44.980032 | orchestrator | 2026-04-13 02:36:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:44.980070 | orchestrator | 2026-04-13 02:36:44 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:36:48.035347 | orchestrator | 2026-04-13 02:36:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:48.037351 | orchestrator | 2026-04-13 02:36:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:48.037394 | orchestrator | 2026-04-13 02:36:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:51.096305 | orchestrator | 2026-04-13 02:36:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:51.100225 | orchestrator | 2026-04-13 02:36:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:51.100313 | orchestrator | 2026-04-13 02:36:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:54.152617 | orchestrator | 2026-04-13 02:36:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:54.154197 | orchestrator | 2026-04-13 02:36:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:54.154380 | orchestrator | 2026-04-13 02:36:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:36:57.212680 | orchestrator | 2026-04-13 02:36:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:36:57.214660 | orchestrator | 2026-04-13 02:36:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:36:57.214718 | orchestrator | 2026-04-13 02:36:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:00.264971 | orchestrator | 2026-04-13 02:37:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:00.267447 | orchestrator | 2026-04-13 02:37:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:00.267626 | orchestrator | 2026-04-13 02:37:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:03.316864 | orchestrator | 2026-04-13 
02:37:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:03.318160 | orchestrator | 2026-04-13 02:37:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:03.318206 | orchestrator | 2026-04-13 02:37:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:06.371556 | orchestrator | 2026-04-13 02:37:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:06.372459 | orchestrator | 2026-04-13 02:37:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:06.372497 | orchestrator | 2026-04-13 02:37:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:09.424772 | orchestrator | 2026-04-13 02:37:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:09.427046 | orchestrator | 2026-04-13 02:37:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:09.427093 | orchestrator | 2026-04-13 02:37:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:12.478368 | orchestrator | 2026-04-13 02:37:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:12.479671 | orchestrator | 2026-04-13 02:37:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:12.479705 | orchestrator | 2026-04-13 02:37:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:15.532558 | orchestrator | 2026-04-13 02:37:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:15.532676 | orchestrator | 2026-04-13 02:37:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:15.532693 | orchestrator | 2026-04-13 02:37:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:18.581736 | orchestrator | 2026-04-13 02:37:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:37:18.584108 | orchestrator | 2026-04-13 02:37:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:18.584142 | orchestrator | 2026-04-13 02:37:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:21.631022 | orchestrator | 2026-04-13 02:37:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:21.631413 | orchestrator | 2026-04-13 02:37:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:21.631468 | orchestrator | 2026-04-13 02:37:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:24.683790 | orchestrator | 2026-04-13 02:37:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:24.684807 | orchestrator | 2026-04-13 02:37:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:24.684850 | orchestrator | 2026-04-13 02:37:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:27.737775 | orchestrator | 2026-04-13 02:37:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:27.738951 | orchestrator | 2026-04-13 02:37:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:27.739006 | orchestrator | 2026-04-13 02:37:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:30.786767 | orchestrator | 2026-04-13 02:37:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:30.788972 | orchestrator | 2026-04-13 02:37:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:30.789051 | orchestrator | 2026-04-13 02:37:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:33.837865 | orchestrator | 2026-04-13 02:37:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:33.840171 | orchestrator | 2026-04-13 02:37:33 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:33.840284 | orchestrator | 2026-04-13 02:37:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:36.887323 | orchestrator | 2026-04-13 02:37:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:36.888805 | orchestrator | 2026-04-13 02:37:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:36.888953 | orchestrator | 2026-04-13 02:37:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:39.940312 | orchestrator | 2026-04-13 02:37:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:39.942820 | orchestrator | 2026-04-13 02:37:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:39.942902 | orchestrator | 2026-04-13 02:37:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:42.994005 | orchestrator | 2026-04-13 02:37:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:42.996224 | orchestrator | 2026-04-13 02:37:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:42.996269 | orchestrator | 2026-04-13 02:37:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:46.056104 | orchestrator | 2026-04-13 02:37:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:46.057584 | orchestrator | 2026-04-13 02:37:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:46.057721 | orchestrator | 2026-04-13 02:37:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:49.115040 | orchestrator | 2026-04-13 02:37:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:49.116167 | orchestrator | 2026-04-13 02:37:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:37:49.116217 | orchestrator | 2026-04-13 02:37:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:52.163151 | orchestrator | 2026-04-13 02:37:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:52.164359 | orchestrator | 2026-04-13 02:37:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:52.164426 | orchestrator | 2026-04-13 02:37:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:55.219092 | orchestrator | 2026-04-13 02:37:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:55.221331 | orchestrator | 2026-04-13 02:37:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:55.221461 | orchestrator | 2026-04-13 02:37:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:37:58.263372 | orchestrator | 2026-04-13 02:37:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:37:58.264836 | orchestrator | 2026-04-13 02:37:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:37:58.264869 | orchestrator | 2026-04-13 02:37:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:01.314900 | orchestrator | 2026-04-13 02:38:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:01.318616 | orchestrator | 2026-04-13 02:38:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:01.319764 | orchestrator | 2026-04-13 02:38:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:04.381372 | orchestrator | 2026-04-13 02:38:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:04.381635 | orchestrator | 2026-04-13 02:38:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:04.381747 | orchestrator | 2026-04-13 02:38:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:38:07.433979 | orchestrator | 2026-04-13 02:38:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:07.436543 | orchestrator | 2026-04-13 02:38:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:07.436590 | orchestrator | 2026-04-13 02:38:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:10.482547 | orchestrator | 2026-04-13 02:38:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:10.485534 | orchestrator | 2026-04-13 02:38:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:10.487270 | orchestrator | 2026-04-13 02:38:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:13.533630 | orchestrator | 2026-04-13 02:38:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:13.535345 | orchestrator | 2026-04-13 02:38:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:13.535459 | orchestrator | 2026-04-13 02:38:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:16.586613 | orchestrator | 2026-04-13 02:38:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:16.589084 | orchestrator | 2026-04-13 02:38:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:16.589140 | orchestrator | 2026-04-13 02:38:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:19.643317 | orchestrator | 2026-04-13 02:38:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:19.645205 | orchestrator | 2026-04-13 02:38:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:19.645241 | orchestrator | 2026-04-13 02:38:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:22.695088 | orchestrator | 2026-04-13 
02:38:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:22.696371 | orchestrator | 2026-04-13 02:38:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:22.696752 | orchestrator | 2026-04-13 02:38:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:25.747794 | orchestrator | 2026-04-13 02:38:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:25.749632 | orchestrator | 2026-04-13 02:38:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:25.749779 | orchestrator | 2026-04-13 02:38:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:28.793598 | orchestrator | 2026-04-13 02:38:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:28.794138 | orchestrator | 2026-04-13 02:38:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:28.794187 | orchestrator | 2026-04-13 02:38:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:31.846871 | orchestrator | 2026-04-13 02:38:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:31.848352 | orchestrator | 2026-04-13 02:38:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:31.848410 | orchestrator | 2026-04-13 02:38:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:34.896703 | orchestrator | 2026-04-13 02:38:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:38:34.899994 | orchestrator | 2026-04-13 02:38:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:38:34.900097 | orchestrator | 2026-04-13 02:38:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:38:37.949677 | orchestrator | 2026-04-13 02:38:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED
2026-04-13 02:38:37.951116 | orchestrator | 2026-04-13 02:38:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:38:37.951151 | orchestrator | 2026-04-13 02:38:37 | INFO  | Wait 1 second(s) until the next check
2026-04-13 02:38:40.997988 | orchestrator | 2026-04-13 02:38:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:38:41.000312 | orchestrator | 2026-04-13 02:38:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:38:41.000370 | orchestrator | 2026-04-13 02:38:41 | INFO  | Wait 1 second(s) until the next check
[... the same three polling lines repeat every ~3 seconds from 02:38:44 through 02:44:07; tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remain in state STARTED throughout ...]
2026-04-13 02:44:07.617268 | orchestrator | 2026-04-13 02:44:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:44:07.619252 | orchestrator | 2026-04-13 02:44:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:44:07.619320 | orchestrator | 2026-04-13 02:44:07 | INFO  | Wait 1 second(s) until the next check
2026-04-13 02:44:10.672908 | orchestrator | 2026-04-13 02:44:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:44:10.674179 | orchestrator | 2026-04-13 02:44:10 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:10.674229 | orchestrator | 2026-04-13 02:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:13.723966 | orchestrator | 2026-04-13 02:44:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:13.725517 | orchestrator | 2026-04-13 02:44:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:13.725548 | orchestrator | 2026-04-13 02:44:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:16.779036 | orchestrator | 2026-04-13 02:44:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:16.782807 | orchestrator | 2026-04-13 02:44:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:16.782966 | orchestrator | 2026-04-13 02:44:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:19.830810 | orchestrator | 2026-04-13 02:44:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:19.832141 | orchestrator | 2026-04-13 02:44:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:19.832179 | orchestrator | 2026-04-13 02:44:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:22.883059 | orchestrator | 2026-04-13 02:44:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:22.885770 | orchestrator | 2026-04-13 02:44:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:22.885860 | orchestrator | 2026-04-13 02:44:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:25.928396 | orchestrator | 2026-04-13 02:44:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:25.929841 | orchestrator | 2026-04-13 02:44:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:44:25.929909 | orchestrator | 2026-04-13 02:44:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:28.978302 | orchestrator | 2026-04-13 02:44:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:28.979532 | orchestrator | 2026-04-13 02:44:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:28.979622 | orchestrator | 2026-04-13 02:44:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:32.034860 | orchestrator | 2026-04-13 02:44:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:32.037672 | orchestrator | 2026-04-13 02:44:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:32.037728 | orchestrator | 2026-04-13 02:44:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:35.094977 | orchestrator | 2026-04-13 02:44:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:35.096824 | orchestrator | 2026-04-13 02:44:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:35.096853 | orchestrator | 2026-04-13 02:44:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:38.151203 | orchestrator | 2026-04-13 02:44:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:38.152156 | orchestrator | 2026-04-13 02:44:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:38.152212 | orchestrator | 2026-04-13 02:44:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:41.205165 | orchestrator | 2026-04-13 02:44:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:41.207017 | orchestrator | 2026-04-13 02:44:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:41.207075 | orchestrator | 2026-04-13 02:44:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:44:44.256828 | orchestrator | 2026-04-13 02:44:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:44.260941 | orchestrator | 2026-04-13 02:44:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:44.261023 | orchestrator | 2026-04-13 02:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:47.315748 | orchestrator | 2026-04-13 02:44:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:47.319766 | orchestrator | 2026-04-13 02:44:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:47.319952 | orchestrator | 2026-04-13 02:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:50.375984 | orchestrator | 2026-04-13 02:44:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:50.378136 | orchestrator | 2026-04-13 02:44:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:50.378213 | orchestrator | 2026-04-13 02:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:53.435250 | orchestrator | 2026-04-13 02:44:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:53.438477 | orchestrator | 2026-04-13 02:44:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:53.438531 | orchestrator | 2026-04-13 02:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:56.495280 | orchestrator | 2026-04-13 02:44:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:56.497448 | orchestrator | 2026-04-13 02:44:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:56.497476 | orchestrator | 2026-04-13 02:44:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:44:59.548080 | orchestrator | 2026-04-13 
02:44:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:44:59.548993 | orchestrator | 2026-04-13 02:44:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:44:59.549023 | orchestrator | 2026-04-13 02:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:02.601524 | orchestrator | 2026-04-13 02:45:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:02.603876 | orchestrator | 2026-04-13 02:45:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:02.604002 | orchestrator | 2026-04-13 02:45:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:05.665121 | orchestrator | 2026-04-13 02:45:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:05.667449 | orchestrator | 2026-04-13 02:45:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:05.667974 | orchestrator | 2026-04-13 02:45:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:08.716020 | orchestrator | 2026-04-13 02:45:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:08.717337 | orchestrator | 2026-04-13 02:45:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:08.717439 | orchestrator | 2026-04-13 02:45:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:11.765035 | orchestrator | 2026-04-13 02:45:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:11.766124 | orchestrator | 2026-04-13 02:45:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:11.766223 | orchestrator | 2026-04-13 02:45:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:14.817809 | orchestrator | 2026-04-13 02:45:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:45:14.820277 | orchestrator | 2026-04-13 02:45:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:14.820334 | orchestrator | 2026-04-13 02:45:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:17.868849 | orchestrator | 2026-04-13 02:45:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:17.870659 | orchestrator | 2026-04-13 02:45:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:17.870718 | orchestrator | 2026-04-13 02:45:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:20.919606 | orchestrator | 2026-04-13 02:45:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:20.922844 | orchestrator | 2026-04-13 02:45:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:20.923021 | orchestrator | 2026-04-13 02:45:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:23.966899 | orchestrator | 2026-04-13 02:45:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:23.968830 | orchestrator | 2026-04-13 02:45:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:23.968890 | orchestrator | 2026-04-13 02:45:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:27.013134 | orchestrator | 2026-04-13 02:45:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:27.015142 | orchestrator | 2026-04-13 02:45:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:27.015190 | orchestrator | 2026-04-13 02:45:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:30.072816 | orchestrator | 2026-04-13 02:45:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:30.074489 | orchestrator | 2026-04-13 02:45:30 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:30.074558 | orchestrator | 2026-04-13 02:45:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:33.123593 | orchestrator | 2026-04-13 02:45:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:33.126167 | orchestrator | 2026-04-13 02:45:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:33.126677 | orchestrator | 2026-04-13 02:45:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:36.184099 | orchestrator | 2026-04-13 02:45:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:36.186858 | orchestrator | 2026-04-13 02:45:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:36.186915 | orchestrator | 2026-04-13 02:45:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:39.244145 | orchestrator | 2026-04-13 02:45:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:39.245934 | orchestrator | 2026-04-13 02:45:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:39.246141 | orchestrator | 2026-04-13 02:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:42.296044 | orchestrator | 2026-04-13 02:45:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:42.297149 | orchestrator | 2026-04-13 02:45:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:42.297182 | orchestrator | 2026-04-13 02:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:45.354871 | orchestrator | 2026-04-13 02:45:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:45.357461 | orchestrator | 2026-04-13 02:45:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:45:45.357698 | orchestrator | 2026-04-13 02:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:48.406634 | orchestrator | 2026-04-13 02:45:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:48.409031 | orchestrator | 2026-04-13 02:45:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:48.409076 | orchestrator | 2026-04-13 02:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:51.459523 | orchestrator | 2026-04-13 02:45:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:51.460917 | orchestrator | 2026-04-13 02:45:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:51.461053 | orchestrator | 2026-04-13 02:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:54.513333 | orchestrator | 2026-04-13 02:45:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:54.514696 | orchestrator | 2026-04-13 02:45:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:54.515026 | orchestrator | 2026-04-13 02:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:45:57.566824 | orchestrator | 2026-04-13 02:45:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:45:57.568344 | orchestrator | 2026-04-13 02:45:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:45:57.568468 | orchestrator | 2026-04-13 02:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:00.618930 | orchestrator | 2026-04-13 02:46:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:00.620706 | orchestrator | 2026-04-13 02:46:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:00.620785 | orchestrator | 2026-04-13 02:46:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:46:03.670905 | orchestrator | 2026-04-13 02:46:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:03.672096 | orchestrator | 2026-04-13 02:46:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:03.672145 | orchestrator | 2026-04-13 02:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:06.726298 | orchestrator | 2026-04-13 02:46:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:06.727958 | orchestrator | 2026-04-13 02:46:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:06.728049 | orchestrator | 2026-04-13 02:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:09.772937 | orchestrator | 2026-04-13 02:46:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:09.775680 | orchestrator | 2026-04-13 02:46:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:09.775757 | orchestrator | 2026-04-13 02:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:12.824851 | orchestrator | 2026-04-13 02:46:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:12.827767 | orchestrator | 2026-04-13 02:46:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:12.827897 | orchestrator | 2026-04-13 02:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:15.876171 | orchestrator | 2026-04-13 02:46:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:15.878453 | orchestrator | 2026-04-13 02:46:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:15.878508 | orchestrator | 2026-04-13 02:46:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:18.929359 | orchestrator | 2026-04-13 
02:46:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:18.935533 | orchestrator | 2026-04-13 02:46:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:18.935965 | orchestrator | 2026-04-13 02:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:21.984538 | orchestrator | 2026-04-13 02:46:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:21.984727 | orchestrator | 2026-04-13 02:46:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:21.984756 | orchestrator | 2026-04-13 02:46:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:25.039592 | orchestrator | 2026-04-13 02:46:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:25.042557 | orchestrator | 2026-04-13 02:46:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:25.042888 | orchestrator | 2026-04-13 02:46:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:28.097538 | orchestrator | 2026-04-13 02:46:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:28.099985 | orchestrator | 2026-04-13 02:46:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:28.100126 | orchestrator | 2026-04-13 02:46:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:31.151053 | orchestrator | 2026-04-13 02:46:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:31.152848 | orchestrator | 2026-04-13 02:46:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:31.152896 | orchestrator | 2026-04-13 02:46:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:34.202526 | orchestrator | 2026-04-13 02:46:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:46:34.205593 | orchestrator | 2026-04-13 02:46:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:34.205651 | orchestrator | 2026-04-13 02:46:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:37.253454 | orchestrator | 2026-04-13 02:46:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:37.254552 | orchestrator | 2026-04-13 02:46:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:37.254601 | orchestrator | 2026-04-13 02:46:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:40.308070 | orchestrator | 2026-04-13 02:46:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:40.310180 | orchestrator | 2026-04-13 02:46:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:40.310268 | orchestrator | 2026-04-13 02:46:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:43.356651 | orchestrator | 2026-04-13 02:46:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:43.358624 | orchestrator | 2026-04-13 02:46:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:43.358687 | orchestrator | 2026-04-13 02:46:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:46.409902 | orchestrator | 2026-04-13 02:46:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:46.412051 | orchestrator | 2026-04-13 02:46:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:46.412093 | orchestrator | 2026-04-13 02:46:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:49.456623 | orchestrator | 2026-04-13 02:46:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:49.459371 | orchestrator | 2026-04-13 02:46:49 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:49.462676 | orchestrator | 2026-04-13 02:46:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:52.506777 | orchestrator | 2026-04-13 02:46:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:52.509110 | orchestrator | 2026-04-13 02:46:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:52.509254 | orchestrator | 2026-04-13 02:46:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:55.565472 | orchestrator | 2026-04-13 02:46:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:55.567160 | orchestrator | 2026-04-13 02:46:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:55.567208 | orchestrator | 2026-04-13 02:46:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:46:58.610389 | orchestrator | 2026-04-13 02:46:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:46:58.610612 | orchestrator | 2026-04-13 02:46:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:46:58.610638 | orchestrator | 2026-04-13 02:46:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:01.668201 | orchestrator | 2026-04-13 02:47:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:01.669694 | orchestrator | 2026-04-13 02:47:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:01.669862 | orchestrator | 2026-04-13 02:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:04.720251 | orchestrator | 2026-04-13 02:47:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:04.721407 | orchestrator | 2026-04-13 02:47:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:47:04.721511 | orchestrator | 2026-04-13 02:47:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:07.774367 | orchestrator | 2026-04-13 02:47:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:07.776459 | orchestrator | 2026-04-13 02:47:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:07.776501 | orchestrator | 2026-04-13 02:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:10.830988 | orchestrator | 2026-04-13 02:47:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:10.832258 | orchestrator | 2026-04-13 02:47:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:10.832293 | orchestrator | 2026-04-13 02:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:13.879524 | orchestrator | 2026-04-13 02:47:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:13.880968 | orchestrator | 2026-04-13 02:47:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:13.881011 | orchestrator | 2026-04-13 02:47:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:16.920026 | orchestrator | 2026-04-13 02:47:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:16.921747 | orchestrator | 2026-04-13 02:47:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:16.921847 | orchestrator | 2026-04-13 02:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:19.967553 | orchestrator | 2026-04-13 02:47:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:19.968827 | orchestrator | 2026-04-13 02:47:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:19.968993 | orchestrator | 2026-04-13 02:47:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:47:23.019684 | orchestrator | 2026-04-13 02:47:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:23.023134 | orchestrator | 2026-04-13 02:47:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:23.023216 | orchestrator | 2026-04-13 02:47:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:26.079330 | orchestrator | 2026-04-13 02:47:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:26.081430 | orchestrator | 2026-04-13 02:47:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:26.081577 | orchestrator | 2026-04-13 02:47:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:29.127008 | orchestrator | 2026-04-13 02:47:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:29.128556 | orchestrator | 2026-04-13 02:47:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:29.128615 | orchestrator | 2026-04-13 02:47:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:32.182550 | orchestrator | 2026-04-13 02:47:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:32.184465 | orchestrator | 2026-04-13 02:47:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:32.184896 | orchestrator | 2026-04-13 02:47:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:35.235481 | orchestrator | 2026-04-13 02:47:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:35.236612 | orchestrator | 2026-04-13 02:47:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:35.236663 | orchestrator | 2026-04-13 02:47:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:38.289528 | orchestrator | 2026-04-13 
02:47:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:38.290264 | orchestrator | 2026-04-13 02:47:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:38.290298 | orchestrator | 2026-04-13 02:47:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:41.343824 | orchestrator | 2026-04-13 02:47:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:41.347091 | orchestrator | 2026-04-13 02:47:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:41.347608 | orchestrator | 2026-04-13 02:47:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:44.397975 | orchestrator | 2026-04-13 02:47:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:44.400288 | orchestrator | 2026-04-13 02:47:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:44.400335 | orchestrator | 2026-04-13 02:47:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:47.451882 | orchestrator | 2026-04-13 02:47:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:47.453913 | orchestrator | 2026-04-13 02:47:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:47.454012 | orchestrator | 2026-04-13 02:47:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:50.512271 | orchestrator | 2026-04-13 02:47:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:47:50.515729 | orchestrator | 2026-04-13 02:47:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:50.515798 | orchestrator | 2026-04-13 02:47:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:47:53.563541 | orchestrator | 2026-04-13 02:47:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:47:53.563632 | orchestrator | 2026-04-13 02:47:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:47:53.563647 | orchestrator | 2026-04-13 02:47:53 | INFO  | Wait 1 second(s) until the next check
[… identical status checks repeated approximately every 3 seconds from 02:47:56 through 02:55:08: tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 both remained in state STARTED throughout …]
2026-04-13 02:55:11.072791 | orchestrator | 2026-04-13 02:55:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state
STARTED 2026-04-13 02:55:11.075167 | orchestrator | 2026-04-13 02:55:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:11.075208 | orchestrator | 2026-04-13 02:55:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:14.122567 | orchestrator | 2026-04-13 02:55:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:14.124538 | orchestrator | 2026-04-13 02:55:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:14.124590 | orchestrator | 2026-04-13 02:55:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:17.171985 | orchestrator | 2026-04-13 02:55:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:17.173439 | orchestrator | 2026-04-13 02:55:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:17.173484 | orchestrator | 2026-04-13 02:55:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:20.227328 | orchestrator | 2026-04-13 02:55:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:20.230194 | orchestrator | 2026-04-13 02:55:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:20.230301 | orchestrator | 2026-04-13 02:55:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:23.278761 | orchestrator | 2026-04-13 02:55:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:23.281790 | orchestrator | 2026-04-13 02:55:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:23.281908 | orchestrator | 2026-04-13 02:55:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:26.329994 | orchestrator | 2026-04-13 02:55:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:26.332278 | orchestrator | 2026-04-13 02:55:26 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:26.332433 | orchestrator | 2026-04-13 02:55:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:29.381049 | orchestrator | 2026-04-13 02:55:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:29.382256 | orchestrator | 2026-04-13 02:55:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:29.382319 | orchestrator | 2026-04-13 02:55:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:32.423919 | orchestrator | 2026-04-13 02:55:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:32.425845 | orchestrator | 2026-04-13 02:55:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:32.425900 | orchestrator | 2026-04-13 02:55:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:35.466746 | orchestrator | 2026-04-13 02:55:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:35.468156 | orchestrator | 2026-04-13 02:55:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:35.468204 | orchestrator | 2026-04-13 02:55:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:38.511503 | orchestrator | 2026-04-13 02:55:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:38.516488 | orchestrator | 2026-04-13 02:55:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:38.516634 | orchestrator | 2026-04-13 02:55:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:41.563215 | orchestrator | 2026-04-13 02:55:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:41.564874 | orchestrator | 2026-04-13 02:55:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:55:41.564947 | orchestrator | 2026-04-13 02:55:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:44.611693 | orchestrator | 2026-04-13 02:55:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:44.614276 | orchestrator | 2026-04-13 02:55:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:44.614306 | orchestrator | 2026-04-13 02:55:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:47.654937 | orchestrator | 2026-04-13 02:55:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:47.655256 | orchestrator | 2026-04-13 02:55:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:47.655290 | orchestrator | 2026-04-13 02:55:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:50.705093 | orchestrator | 2026-04-13 02:55:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:50.706590 | orchestrator | 2026-04-13 02:55:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:50.706633 | orchestrator | 2026-04-13 02:55:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:53.753070 | orchestrator | 2026-04-13 02:55:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:53.755368 | orchestrator | 2026-04-13 02:55:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:53.755599 | orchestrator | 2026-04-13 02:55:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:55:56.801602 | orchestrator | 2026-04-13 02:55:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:56.803527 | orchestrator | 2026-04-13 02:55:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:56.803580 | orchestrator | 2026-04-13 02:55:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:55:59.851540 | orchestrator | 2026-04-13 02:55:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:55:59.853530 | orchestrator | 2026-04-13 02:55:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:55:59.853572 | orchestrator | 2026-04-13 02:55:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:02.896115 | orchestrator | 2026-04-13 02:56:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:02.897961 | orchestrator | 2026-04-13 02:56:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:02.898117 | orchestrator | 2026-04-13 02:56:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:05.948624 | orchestrator | 2026-04-13 02:56:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:05.950294 | orchestrator | 2026-04-13 02:56:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:05.950353 | orchestrator | 2026-04-13 02:56:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:09.003665 | orchestrator | 2026-04-13 02:56:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:09.005551 | orchestrator | 2026-04-13 02:56:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:09.005625 | orchestrator | 2026-04-13 02:56:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:12.051618 | orchestrator | 2026-04-13 02:56:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:12.055374 | orchestrator | 2026-04-13 02:56:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:12.055506 | orchestrator | 2026-04-13 02:56:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:15.095891 | orchestrator | 2026-04-13 
02:56:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:15.097622 | orchestrator | 2026-04-13 02:56:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:15.097665 | orchestrator | 2026-04-13 02:56:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:18.140756 | orchestrator | 2026-04-13 02:56:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:18.143791 | orchestrator | 2026-04-13 02:56:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:18.143858 | orchestrator | 2026-04-13 02:56:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:21.191675 | orchestrator | 2026-04-13 02:56:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:21.193153 | orchestrator | 2026-04-13 02:56:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:21.193218 | orchestrator | 2026-04-13 02:56:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:24.236797 | orchestrator | 2026-04-13 02:56:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:24.243240 | orchestrator | 2026-04-13 02:56:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:24.243329 | orchestrator | 2026-04-13 02:56:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:27.294789 | orchestrator | 2026-04-13 02:56:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:27.295655 | orchestrator | 2026-04-13 02:56:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:27.295697 | orchestrator | 2026-04-13 02:56:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:30.338383 | orchestrator | 2026-04-13 02:56:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:56:30.339702 | orchestrator | 2026-04-13 02:56:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:30.339740 | orchestrator | 2026-04-13 02:56:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:33.387742 | orchestrator | 2026-04-13 02:56:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:33.391385 | orchestrator | 2026-04-13 02:56:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:33.391966 | orchestrator | 2026-04-13 02:56:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:36.442621 | orchestrator | 2026-04-13 02:56:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:36.444531 | orchestrator | 2026-04-13 02:56:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:36.444766 | orchestrator | 2026-04-13 02:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:39.490207 | orchestrator | 2026-04-13 02:56:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:39.491698 | orchestrator | 2026-04-13 02:56:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:39.491758 | orchestrator | 2026-04-13 02:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:42.537544 | orchestrator | 2026-04-13 02:56:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:42.540368 | orchestrator | 2026-04-13 02:56:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:42.540479 | orchestrator | 2026-04-13 02:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:45.588801 | orchestrator | 2026-04-13 02:56:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:45.590708 | orchestrator | 2026-04-13 02:56:45 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:45.590814 | orchestrator | 2026-04-13 02:56:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:48.634833 | orchestrator | 2026-04-13 02:56:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:48.635717 | orchestrator | 2026-04-13 02:56:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:48.635768 | orchestrator | 2026-04-13 02:56:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:51.680048 | orchestrator | 2026-04-13 02:56:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:51.681186 | orchestrator | 2026-04-13 02:56:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:51.681219 | orchestrator | 2026-04-13 02:56:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:54.728049 | orchestrator | 2026-04-13 02:56:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:54.730745 | orchestrator | 2026-04-13 02:56:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:54.730961 | orchestrator | 2026-04-13 02:56:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:56:57.774821 | orchestrator | 2026-04-13 02:56:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:56:57.776066 | orchestrator | 2026-04-13 02:56:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:56:57.776193 | orchestrator | 2026-04-13 02:56:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:00.823984 | orchestrator | 2026-04-13 02:57:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:00.824768 | orchestrator | 2026-04-13 02:57:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:57:00.824815 | orchestrator | 2026-04-13 02:57:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:03.867847 | orchestrator | 2026-04-13 02:57:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:03.869298 | orchestrator | 2026-04-13 02:57:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:03.870103 | orchestrator | 2026-04-13 02:57:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:06.920836 | orchestrator | 2026-04-13 02:57:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:06.921264 | orchestrator | 2026-04-13 02:57:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:06.921376 | orchestrator | 2026-04-13 02:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:09.975544 | orchestrator | 2026-04-13 02:57:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:09.976884 | orchestrator | 2026-04-13 02:57:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:09.976923 | orchestrator | 2026-04-13 02:57:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:13.029319 | orchestrator | 2026-04-13 02:57:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:13.031562 | orchestrator | 2026-04-13 02:57:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:13.031586 | orchestrator | 2026-04-13 02:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:16.082713 | orchestrator | 2026-04-13 02:57:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:16.083128 | orchestrator | 2026-04-13 02:57:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:16.083177 | orchestrator | 2026-04-13 02:57:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 02:57:19.124503 | orchestrator | 2026-04-13 02:57:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:19.126383 | orchestrator | 2026-04-13 02:57:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:19.126459 | orchestrator | 2026-04-13 02:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:22.174956 | orchestrator | 2026-04-13 02:57:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:22.177124 | orchestrator | 2026-04-13 02:57:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:22.177152 | orchestrator | 2026-04-13 02:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:25.214381 | orchestrator | 2026-04-13 02:57:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:25.217559 | orchestrator | 2026-04-13 02:57:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:25.217621 | orchestrator | 2026-04-13 02:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:28.253743 | orchestrator | 2026-04-13 02:57:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:28.255948 | orchestrator | 2026-04-13 02:57:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:28.256037 | orchestrator | 2026-04-13 02:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:31.295861 | orchestrator | 2026-04-13 02:57:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:31.297734 | orchestrator | 2026-04-13 02:57:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:31.297787 | orchestrator | 2026-04-13 02:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:34.338389 | orchestrator | 2026-04-13 
02:57:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:34.339216 | orchestrator | 2026-04-13 02:57:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:34.339257 | orchestrator | 2026-04-13 02:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:37.401847 | orchestrator | 2026-04-13 02:57:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:37.402498 | orchestrator | 2026-04-13 02:57:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:37.402534 | orchestrator | 2026-04-13 02:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:40.450821 | orchestrator | 2026-04-13 02:57:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:40.453136 | orchestrator | 2026-04-13 02:57:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:40.453192 | orchestrator | 2026-04-13 02:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:43.498743 | orchestrator | 2026-04-13 02:57:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:43.499488 | orchestrator | 2026-04-13 02:57:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:43.499528 | orchestrator | 2026-04-13 02:57:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:46.546153 | orchestrator | 2026-04-13 02:57:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:46.549174 | orchestrator | 2026-04-13 02:57:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:46.549278 | orchestrator | 2026-04-13 02:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:49.588890 | orchestrator | 2026-04-13 02:57:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 02:57:49.589687 | orchestrator | 2026-04-13 02:57:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:49.589706 | orchestrator | 2026-04-13 02:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:52.624307 | orchestrator | 2026-04-13 02:57:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:52.625004 | orchestrator | 2026-04-13 02:57:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:52.625042 | orchestrator | 2026-04-13 02:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:55.666868 | orchestrator | 2026-04-13 02:57:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:55.668638 | orchestrator | 2026-04-13 02:57:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:55.668697 | orchestrator | 2026-04-13 02:57:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:57:58.718339 | orchestrator | 2026-04-13 02:57:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:57:58.719762 | orchestrator | 2026-04-13 02:57:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:57:58.719875 | orchestrator | 2026-04-13 02:57:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:01.765951 | orchestrator | 2026-04-13 02:58:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:01.768745 | orchestrator | 2026-04-13 02:58:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:01.768807 | orchestrator | 2026-04-13 02:58:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:04.810394 | orchestrator | 2026-04-13 02:58:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:04.812736 | orchestrator | 2026-04-13 02:58:04 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:04.812790 | orchestrator | 2026-04-13 02:58:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:07.859204 | orchestrator | 2026-04-13 02:58:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:07.860698 | orchestrator | 2026-04-13 02:58:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:07.860750 | orchestrator | 2026-04-13 02:58:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:10.907845 | orchestrator | 2026-04-13 02:58:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:10.909364 | orchestrator | 2026-04-13 02:58:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:10.909504 | orchestrator | 2026-04-13 02:58:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:13.961530 | orchestrator | 2026-04-13 02:58:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:13.961651 | orchestrator | 2026-04-13 02:58:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:13.961696 | orchestrator | 2026-04-13 02:58:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:17.005954 | orchestrator | 2026-04-13 02:58:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:17.007979 | orchestrator | 2026-04-13 02:58:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:17.008060 | orchestrator | 2026-04-13 02:58:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:20.052588 | orchestrator | 2026-04-13 02:58:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:20.054754 | orchestrator | 2026-04-13 02:58:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
02:58:20.054886 | orchestrator | 2026-04-13 02:58:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:23.096475 | orchestrator | 2026-04-13 02:58:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:23.098330 | orchestrator | 2026-04-13 02:58:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:23.098354 | orchestrator | 2026-04-13 02:58:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:26.163700 | orchestrator | 2026-04-13 02:58:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:26.163846 | orchestrator | 2026-04-13 02:58:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:26.164016 | orchestrator | 2026-04-13 02:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:29.205237 | orchestrator | 2026-04-13 02:58:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:29.207990 | orchestrator | 2026-04-13 02:58:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:29.208043 | orchestrator | 2026-04-13 02:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:32.251634 | orchestrator | 2026-04-13 02:58:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:32.252818 | orchestrator | 2026-04-13 02:58:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:32.252843 | orchestrator | 2026-04-13 02:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 02:58:35.298787 | orchestrator | 2026-04-13 02:58:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 02:58:35.300202 | orchestrator | 2026-04-13 02:58:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 02:58:35.300299 | orchestrator | 2026-04-13 02:58:35 | INFO  | Wait 1 second(s) 
until the next check
2026-04-13 02:58:38.349203 | orchestrator | 2026-04-13 02:58:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 02:58:38.350588 | orchestrator | 2026-04-13 02:58:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 02:58:38.350628 | orchestrator | 2026-04-13 02:58:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated approximately every 3 seconds from 02:58:41 through 03:03:49; both tasks remained in state STARTED throughout ...]
2026-04-13 03:03:52.685821 | orchestrator | 2026-04-13 03:03:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:03:52.687732 | orchestrator | 2026-04-13 03:03:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:03:52.687770 | orchestrator | 2026-04-13 03:03:52 | INFO  | Wait 1 second(s)
until the next check 2026-04-13 03:03:55.742194 | orchestrator | 2026-04-13 03:03:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:03:55.744721 | orchestrator | 2026-04-13 03:03:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:03:55.744760 | orchestrator | 2026-04-13 03:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:03:58.794939 | orchestrator | 2026-04-13 03:03:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:03:58.795111 | orchestrator | 2026-04-13 03:03:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:03:58.795128 | orchestrator | 2026-04-13 03:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:01.844063 | orchestrator | 2026-04-13 03:04:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:01.846757 | orchestrator | 2026-04-13 03:04:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:01.846846 | orchestrator | 2026-04-13 03:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:04.901800 | orchestrator | 2026-04-13 03:04:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:04.904339 | orchestrator | 2026-04-13 03:04:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:04.904400 | orchestrator | 2026-04-13 03:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:07.952744 | orchestrator | 2026-04-13 03:04:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:07.956144 | orchestrator | 2026-04-13 03:04:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:07.956258 | orchestrator | 2026-04-13 03:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:11.015780 | orchestrator | 2026-04-13 
03:04:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:11.017015 | orchestrator | 2026-04-13 03:04:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:11.017054 | orchestrator | 2026-04-13 03:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:14.069339 | orchestrator | 2026-04-13 03:04:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:14.071093 | orchestrator | 2026-04-13 03:04:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:14.071195 | orchestrator | 2026-04-13 03:04:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:17.130197 | orchestrator | 2026-04-13 03:04:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:17.131079 | orchestrator | 2026-04-13 03:04:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:17.131109 | orchestrator | 2026-04-13 03:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:20.186602 | orchestrator | 2026-04-13 03:04:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:20.188589 | orchestrator | 2026-04-13 03:04:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:20.188633 | orchestrator | 2026-04-13 03:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:23.239651 | orchestrator | 2026-04-13 03:04:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:23.240746 | orchestrator | 2026-04-13 03:04:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:23.240774 | orchestrator | 2026-04-13 03:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:26.292009 | orchestrator | 2026-04-13 03:04:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:04:26.294573 | orchestrator | 2026-04-13 03:04:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:26.294620 | orchestrator | 2026-04-13 03:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:29.345784 | orchestrator | 2026-04-13 03:04:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:29.348354 | orchestrator | 2026-04-13 03:04:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:29.348437 | orchestrator | 2026-04-13 03:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:32.410787 | orchestrator | 2026-04-13 03:04:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:32.412481 | orchestrator | 2026-04-13 03:04:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:32.412608 | orchestrator | 2026-04-13 03:04:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:35.463105 | orchestrator | 2026-04-13 03:04:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:35.464706 | orchestrator | 2026-04-13 03:04:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:35.464804 | orchestrator | 2026-04-13 03:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:38.520215 | orchestrator | 2026-04-13 03:04:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:38.522353 | orchestrator | 2026-04-13 03:04:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:38.522394 | orchestrator | 2026-04-13 03:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:41.565306 | orchestrator | 2026-04-13 03:04:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:41.565937 | orchestrator | 2026-04-13 03:04:41 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:41.565963 | orchestrator | 2026-04-13 03:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:44.615257 | orchestrator | 2026-04-13 03:04:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:44.617747 | orchestrator | 2026-04-13 03:04:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:44.617804 | orchestrator | 2026-04-13 03:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:47.684915 | orchestrator | 2026-04-13 03:04:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:47.685741 | orchestrator | 2026-04-13 03:04:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:47.685797 | orchestrator | 2026-04-13 03:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:50.729763 | orchestrator | 2026-04-13 03:04:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:50.730097 | orchestrator | 2026-04-13 03:04:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:50.730120 | orchestrator | 2026-04-13 03:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:53.788591 | orchestrator | 2026-04-13 03:04:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:53.790541 | orchestrator | 2026-04-13 03:04:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:53.790625 | orchestrator | 2026-04-13 03:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:56.844822 | orchestrator | 2026-04-13 03:04:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:56.848535 | orchestrator | 2026-04-13 03:04:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:04:56.848640 | orchestrator | 2026-04-13 03:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:04:59.898814 | orchestrator | 2026-04-13 03:04:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:04:59.899986 | orchestrator | 2026-04-13 03:04:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:04:59.900046 | orchestrator | 2026-04-13 03:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:02.958635 | orchestrator | 2026-04-13 03:05:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:02.962216 | orchestrator | 2026-04-13 03:05:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:02.962316 | orchestrator | 2026-04-13 03:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:06.017177 | orchestrator | 2026-04-13 03:05:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:06.020704 | orchestrator | 2026-04-13 03:05:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:06.021208 | orchestrator | 2026-04-13 03:05:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:09.076575 | orchestrator | 2026-04-13 03:05:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:09.077738 | orchestrator | 2026-04-13 03:05:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:09.077773 | orchestrator | 2026-04-13 03:05:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:12.125021 | orchestrator | 2026-04-13 03:05:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:12.126734 | orchestrator | 2026-04-13 03:05:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:12.126777 | orchestrator | 2026-04-13 03:05:12 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:05:15.180022 | orchestrator | 2026-04-13 03:05:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:15.181791 | orchestrator | 2026-04-13 03:05:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:15.182051 | orchestrator | 2026-04-13 03:05:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:18.236625 | orchestrator | 2026-04-13 03:05:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:18.238356 | orchestrator | 2026-04-13 03:05:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:18.238430 | orchestrator | 2026-04-13 03:05:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:21.292925 | orchestrator | 2026-04-13 03:05:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:21.295229 | orchestrator | 2026-04-13 03:05:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:21.295297 | orchestrator | 2026-04-13 03:05:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:24.344772 | orchestrator | 2026-04-13 03:05:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:24.347485 | orchestrator | 2026-04-13 03:05:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:24.347984 | orchestrator | 2026-04-13 03:05:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:27.403053 | orchestrator | 2026-04-13 03:05:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:27.405753 | orchestrator | 2026-04-13 03:05:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:27.405810 | orchestrator | 2026-04-13 03:05:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:30.460624 | orchestrator | 2026-04-13 
03:05:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:30.466574 | orchestrator | 2026-04-13 03:05:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:30.466634 | orchestrator | 2026-04-13 03:05:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:33.518599 | orchestrator | 2026-04-13 03:05:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:33.519344 | orchestrator | 2026-04-13 03:05:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:33.519614 | orchestrator | 2026-04-13 03:05:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:36.572769 | orchestrator | 2026-04-13 03:05:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:36.575810 | orchestrator | 2026-04-13 03:05:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:36.575888 | orchestrator | 2026-04-13 03:05:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:39.628140 | orchestrator | 2026-04-13 03:05:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:39.630367 | orchestrator | 2026-04-13 03:05:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:39.630605 | orchestrator | 2026-04-13 03:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:42.681149 | orchestrator | 2026-04-13 03:05:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:42.682372 | orchestrator | 2026-04-13 03:05:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:42.682430 | orchestrator | 2026-04-13 03:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:45.727496 | orchestrator | 2026-04-13 03:05:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:05:45.729039 | orchestrator | 2026-04-13 03:05:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:45.729088 | orchestrator | 2026-04-13 03:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:48.783482 | orchestrator | 2026-04-13 03:05:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:48.784975 | orchestrator | 2026-04-13 03:05:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:48.785076 | orchestrator | 2026-04-13 03:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:51.823140 | orchestrator | 2026-04-13 03:05:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:51.824326 | orchestrator | 2026-04-13 03:05:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:51.824367 | orchestrator | 2026-04-13 03:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:54.875230 | orchestrator | 2026-04-13 03:05:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:54.876413 | orchestrator | 2026-04-13 03:05:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:54.876651 | orchestrator | 2026-04-13 03:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:05:57.931541 | orchestrator | 2026-04-13 03:05:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:05:57.933983 | orchestrator | 2026-04-13 03:05:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:05:57.934294 | orchestrator | 2026-04-13 03:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:00.982275 | orchestrator | 2026-04-13 03:06:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:00.983286 | orchestrator | 2026-04-13 03:06:00 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:00.983334 | orchestrator | 2026-04-13 03:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:04.038157 | orchestrator | 2026-04-13 03:06:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:04.038747 | orchestrator | 2026-04-13 03:06:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:04.038782 | orchestrator | 2026-04-13 03:06:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:07.080840 | orchestrator | 2026-04-13 03:06:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:07.082235 | orchestrator | 2026-04-13 03:06:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:07.082390 | orchestrator | 2026-04-13 03:06:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:10.130250 | orchestrator | 2026-04-13 03:06:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:10.133319 | orchestrator | 2026-04-13 03:06:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:10.133377 | orchestrator | 2026-04-13 03:06:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:13.190181 | orchestrator | 2026-04-13 03:06:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:13.192011 | orchestrator | 2026-04-13 03:06:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:13.192061 | orchestrator | 2026-04-13 03:06:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:16.242760 | orchestrator | 2026-04-13 03:06:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:16.244911 | orchestrator | 2026-04-13 03:06:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:06:16.244992 | orchestrator | 2026-04-13 03:06:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:19.293836 | orchestrator | 2026-04-13 03:06:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:19.295215 | orchestrator | 2026-04-13 03:06:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:19.295263 | orchestrator | 2026-04-13 03:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:22.349603 | orchestrator | 2026-04-13 03:06:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:22.351951 | orchestrator | 2026-04-13 03:06:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:22.351995 | orchestrator | 2026-04-13 03:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:25.396333 | orchestrator | 2026-04-13 03:06:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:25.398512 | orchestrator | 2026-04-13 03:06:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:25.398593 | orchestrator | 2026-04-13 03:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:28.452082 | orchestrator | 2026-04-13 03:06:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:28.454881 | orchestrator | 2026-04-13 03:06:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:28.454928 | orchestrator | 2026-04-13 03:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:31.500784 | orchestrator | 2026-04-13 03:06:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:31.502797 | orchestrator | 2026-04-13 03:06:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:31.502852 | orchestrator | 2026-04-13 03:06:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:06:34.547886 | orchestrator | 2026-04-13 03:06:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:34.549794 | orchestrator | 2026-04-13 03:06:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:34.549864 | orchestrator | 2026-04-13 03:06:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:37.598160 | orchestrator | 2026-04-13 03:06:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:37.599680 | orchestrator | 2026-04-13 03:06:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:37.599734 | orchestrator | 2026-04-13 03:06:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:40.656361 | orchestrator | 2026-04-13 03:06:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:40.658988 | orchestrator | 2026-04-13 03:06:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:40.659125 | orchestrator | 2026-04-13 03:06:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:43.703606 | orchestrator | 2026-04-13 03:06:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:43.705591 | orchestrator | 2026-04-13 03:06:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:43.705657 | orchestrator | 2026-04-13 03:06:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:46.750704 | orchestrator | 2026-04-13 03:06:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:46.753931 | orchestrator | 2026-04-13 03:06:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:46.753978 | orchestrator | 2026-04-13 03:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:49.797274 | orchestrator | 2026-04-13 
03:06:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:49.797472 | orchestrator | 2026-04-13 03:06:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:49.797490 | orchestrator | 2026-04-13 03:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:52.842459 | orchestrator | 2026-04-13 03:06:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:52.844199 | orchestrator | 2026-04-13 03:06:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:52.844252 | orchestrator | 2026-04-13 03:06:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:55.903629 | orchestrator | 2026-04-13 03:06:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:55.904810 | orchestrator | 2026-04-13 03:06:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:55.904858 | orchestrator | 2026-04-13 03:06:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:06:58.950715 | orchestrator | 2026-04-13 03:06:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:06:58.952322 | orchestrator | 2026-04-13 03:06:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:06:58.952370 | orchestrator | 2026-04-13 03:06:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:02.006640 | orchestrator | 2026-04-13 03:07:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:02.008106 | orchestrator | 2026-04-13 03:07:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:02.008162 | orchestrator | 2026-04-13 03:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:05.064287 | orchestrator | 2026-04-13 03:07:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:07:05.066199 | orchestrator | 2026-04-13 03:07:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:05.066270 | orchestrator | 2026-04-13 03:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:08.118298 | orchestrator | 2026-04-13 03:07:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:08.119395 | orchestrator | 2026-04-13 03:07:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:08.119436 | orchestrator | 2026-04-13 03:07:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:11.172164 | orchestrator | 2026-04-13 03:07:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:11.174272 | orchestrator | 2026-04-13 03:07:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:11.174325 | orchestrator | 2026-04-13 03:07:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:14.231682 | orchestrator | 2026-04-13 03:07:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:14.233861 | orchestrator | 2026-04-13 03:07:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:14.233904 | orchestrator | 2026-04-13 03:07:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:17.283206 | orchestrator | 2026-04-13 03:07:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:17.285285 | orchestrator | 2026-04-13 03:07:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:17.285318 | orchestrator | 2026-04-13 03:07:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:20.337058 | orchestrator | 2026-04-13 03:07:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:20.338730 | orchestrator | 2026-04-13 03:07:20 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:20.338791 | orchestrator | 2026-04-13 03:07:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:23.391226 | orchestrator | 2026-04-13 03:07:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:23.392795 | orchestrator | 2026-04-13 03:07:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:23.392849 | orchestrator | 2026-04-13 03:07:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:26.444508 | orchestrator | 2026-04-13 03:07:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:26.446974 | orchestrator | 2026-04-13 03:07:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:26.447077 | orchestrator | 2026-04-13 03:07:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:29.500649 | orchestrator | 2026-04-13 03:07:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:29.503010 | orchestrator | 2026-04-13 03:07:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:29.503085 | orchestrator | 2026-04-13 03:07:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:32.548250 | orchestrator | 2026-04-13 03:07:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:32.550481 | orchestrator | 2026-04-13 03:07:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:07:32.550542 | orchestrator | 2026-04-13 03:07:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:07:35.601997 | orchestrator | 2026-04-13 03:07:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:07:35.603255 | orchestrator | 2026-04-13 03:07:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:07:35.603324 | orchestrator | 2026-04-13 03:07:35 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:07:38.656091 | orchestrator | 2026-04-13 03:07:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:07:38.657771 | orchestrator | 2026-04-13 03:07:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:07:38.657933 | orchestrator | 2026-04-13 03:07:38 | INFO  | Wait 1 second(s) until the next check
[identical poll cycle repeated every ~3 seconds: tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remained in state STARTED from 03:07:38 through 03:13:05]
2026-04-13 03:13:08.303791 | orchestrator | 2026-04-13 03:13:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:13:08.305494 | orchestrator | 2026-04-13 03:13:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:13:08.305572 | orchestrator | 2026-04-13 03:13:08 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:13:11.348885 | orchestrator | 2026-04-13 03:13:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:11.350547 | orchestrator | 2026-04-13 03:13:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:11.350591 | orchestrator | 2026-04-13 03:13:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:14.408002 | orchestrator | 2026-04-13 03:13:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:14.409704 | orchestrator | 2026-04-13 03:13:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:14.409767 | orchestrator | 2026-04-13 03:13:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:17.467360 | orchestrator | 2026-04-13 03:13:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:17.468706 | orchestrator | 2026-04-13 03:13:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:17.468795 | orchestrator | 2026-04-13 03:13:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:20.521743 | orchestrator | 2026-04-13 03:13:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:20.525085 | orchestrator | 2026-04-13 03:13:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:20.525484 | orchestrator | 2026-04-13 03:13:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:23.578153 | orchestrator | 2026-04-13 03:13:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:23.579744 | orchestrator | 2026-04-13 03:13:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:23.579802 | orchestrator | 2026-04-13 03:13:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:26.623610 | orchestrator | 2026-04-13 
03:13:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:26.625735 | orchestrator | 2026-04-13 03:13:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:26.625782 | orchestrator | 2026-04-13 03:13:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:29.674897 | orchestrator | 2026-04-13 03:13:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:29.676778 | orchestrator | 2026-04-13 03:13:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:29.676824 | orchestrator | 2026-04-13 03:13:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:32.731844 | orchestrator | 2026-04-13 03:13:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:32.732761 | orchestrator | 2026-04-13 03:13:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:32.732796 | orchestrator | 2026-04-13 03:13:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:35.778004 | orchestrator | 2026-04-13 03:13:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:35.778965 | orchestrator | 2026-04-13 03:13:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:35.779107 | orchestrator | 2026-04-13 03:13:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:38.826821 | orchestrator | 2026-04-13 03:13:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:38.828709 | orchestrator | 2026-04-13 03:13:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:38.828768 | orchestrator | 2026-04-13 03:13:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:41.883943 | orchestrator | 2026-04-13 03:13:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:13:41.885976 | orchestrator | 2026-04-13 03:13:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:41.886122 | orchestrator | 2026-04-13 03:13:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:44.937430 | orchestrator | 2026-04-13 03:13:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:44.939631 | orchestrator | 2026-04-13 03:13:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:44.939684 | orchestrator | 2026-04-13 03:13:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:47.987422 | orchestrator | 2026-04-13 03:13:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:47.988901 | orchestrator | 2026-04-13 03:13:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:47.989105 | orchestrator | 2026-04-13 03:13:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:51.039435 | orchestrator | 2026-04-13 03:13:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:51.040511 | orchestrator | 2026-04-13 03:13:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:51.040767 | orchestrator | 2026-04-13 03:13:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:54.091602 | orchestrator | 2026-04-13 03:13:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:54.096546 | orchestrator | 2026-04-13 03:13:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:54.096626 | orchestrator | 2026-04-13 03:13:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:13:57.141817 | orchestrator | 2026-04-13 03:13:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:13:57.144420 | orchestrator | 2026-04-13 03:13:57 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:13:57.144472 | orchestrator | 2026-04-13 03:13:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:00.193152 | orchestrator | 2026-04-13 03:14:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:00.193481 | orchestrator | 2026-04-13 03:14:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:00.193802 | orchestrator | 2026-04-13 03:14:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:03.253290 | orchestrator | 2026-04-13 03:14:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:03.254922 | orchestrator | 2026-04-13 03:14:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:03.254978 | orchestrator | 2026-04-13 03:14:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:06.312416 | orchestrator | 2026-04-13 03:14:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:06.314649 | orchestrator | 2026-04-13 03:14:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:06.314715 | orchestrator | 2026-04-13 03:14:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:09.365322 | orchestrator | 2026-04-13 03:14:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:09.366553 | orchestrator | 2026-04-13 03:14:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:09.366598 | orchestrator | 2026-04-13 03:14:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:12.415955 | orchestrator | 2026-04-13 03:14:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:12.418889 | orchestrator | 2026-04-13 03:14:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:14:12.419165 | orchestrator | 2026-04-13 03:14:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:15.475127 | orchestrator | 2026-04-13 03:14:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:15.477110 | orchestrator | 2026-04-13 03:14:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:15.477195 | orchestrator | 2026-04-13 03:14:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:18.532905 | orchestrator | 2026-04-13 03:14:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:18.534530 | orchestrator | 2026-04-13 03:14:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:18.534582 | orchestrator | 2026-04-13 03:14:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:21.579608 | orchestrator | 2026-04-13 03:14:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:21.581637 | orchestrator | 2026-04-13 03:14:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:21.581721 | orchestrator | 2026-04-13 03:14:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:24.633687 | orchestrator | 2026-04-13 03:14:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:24.634854 | orchestrator | 2026-04-13 03:14:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:24.634948 | orchestrator | 2026-04-13 03:14:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:27.689704 | orchestrator | 2026-04-13 03:14:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:27.691759 | orchestrator | 2026-04-13 03:14:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:27.691819 | orchestrator | 2026-04-13 03:14:27 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:14:30.737909 | orchestrator | 2026-04-13 03:14:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:30.738010 | orchestrator | 2026-04-13 03:14:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:30.738092 | orchestrator | 2026-04-13 03:14:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:33.781930 | orchestrator | 2026-04-13 03:14:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:33.784377 | orchestrator | 2026-04-13 03:14:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:33.784416 | orchestrator | 2026-04-13 03:14:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:36.829433 | orchestrator | 2026-04-13 03:14:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:36.833057 | orchestrator | 2026-04-13 03:14:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:36.833149 | orchestrator | 2026-04-13 03:14:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:39.870779 | orchestrator | 2026-04-13 03:14:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:39.871417 | orchestrator | 2026-04-13 03:14:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:39.871450 | orchestrator | 2026-04-13 03:14:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:42.920344 | orchestrator | 2026-04-13 03:14:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:42.922588 | orchestrator | 2026-04-13 03:14:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:42.922666 | orchestrator | 2026-04-13 03:14:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:45.963396 | orchestrator | 2026-04-13 
03:14:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:45.964981 | orchestrator | 2026-04-13 03:14:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:45.965035 | orchestrator | 2026-04-13 03:14:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:49.006531 | orchestrator | 2026-04-13 03:14:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:49.008064 | orchestrator | 2026-04-13 03:14:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:49.008093 | orchestrator | 2026-04-13 03:14:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:52.057334 | orchestrator | 2026-04-13 03:14:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:52.058425 | orchestrator | 2026-04-13 03:14:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:52.058505 | orchestrator | 2026-04-13 03:14:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:55.103639 | orchestrator | 2026-04-13 03:14:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:55.104092 | orchestrator | 2026-04-13 03:14:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:55.104109 | orchestrator | 2026-04-13 03:14:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:14:58.159911 | orchestrator | 2026-04-13 03:14:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:14:58.161134 | orchestrator | 2026-04-13 03:14:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:14:58.161214 | orchestrator | 2026-04-13 03:14:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:01.206560 | orchestrator | 2026-04-13 03:15:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:15:01.208330 | orchestrator | 2026-04-13 03:15:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:01.208419 | orchestrator | 2026-04-13 03:15:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:04.261404 | orchestrator | 2026-04-13 03:15:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:04.262584 | orchestrator | 2026-04-13 03:15:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:04.262619 | orchestrator | 2026-04-13 03:15:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:07.312211 | orchestrator | 2026-04-13 03:15:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:07.313044 | orchestrator | 2026-04-13 03:15:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:07.313149 | orchestrator | 2026-04-13 03:15:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:10.378695 | orchestrator | 2026-04-13 03:15:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:10.378767 | orchestrator | 2026-04-13 03:15:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:10.378773 | orchestrator | 2026-04-13 03:15:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:13.437493 | orchestrator | 2026-04-13 03:15:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:13.439516 | orchestrator | 2026-04-13 03:15:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:13.439569 | orchestrator | 2026-04-13 03:15:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:16.494306 | orchestrator | 2026-04-13 03:15:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:16.496014 | orchestrator | 2026-04-13 03:15:16 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:16.496047 | orchestrator | 2026-04-13 03:15:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:19.542659 | orchestrator | 2026-04-13 03:15:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:19.543836 | orchestrator | 2026-04-13 03:15:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:19.543896 | orchestrator | 2026-04-13 03:15:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:22.606919 | orchestrator | 2026-04-13 03:15:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:22.607369 | orchestrator | 2026-04-13 03:15:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:22.607406 | orchestrator | 2026-04-13 03:15:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:25.669939 | orchestrator | 2026-04-13 03:15:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:25.671485 | orchestrator | 2026-04-13 03:15:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:25.671512 | orchestrator | 2026-04-13 03:15:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:28.725337 | orchestrator | 2026-04-13 03:15:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:28.728288 | orchestrator | 2026-04-13 03:15:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:28.728412 | orchestrator | 2026-04-13 03:15:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:31.786102 | orchestrator | 2026-04-13 03:15:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:31.789873 | orchestrator | 2026-04-13 03:15:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:15:31.789966 | orchestrator | 2026-04-13 03:15:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:34.844603 | orchestrator | 2026-04-13 03:15:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:34.847968 | orchestrator | 2026-04-13 03:15:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:34.848016 | orchestrator | 2026-04-13 03:15:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:37.921539 | orchestrator | 2026-04-13 03:15:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:37.921603 | orchestrator | 2026-04-13 03:15:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:37.921609 | orchestrator | 2026-04-13 03:15:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:40.975012 | orchestrator | 2026-04-13 03:15:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:40.975201 | orchestrator | 2026-04-13 03:15:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:40.976074 | orchestrator | 2026-04-13 03:15:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:44.026076 | orchestrator | 2026-04-13 03:15:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:44.029866 | orchestrator | 2026-04-13 03:15:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:44.029957 | orchestrator | 2026-04-13 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:47.091767 | orchestrator | 2026-04-13 03:15:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:47.093935 | orchestrator | 2026-04-13 03:15:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:47.093986 | orchestrator | 2026-04-13 03:15:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:15:50.158987 | orchestrator | 2026-04-13 03:15:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:50.159445 | orchestrator | 2026-04-13 03:15:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:50.159481 | orchestrator | 2026-04-13 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:53.217602 | orchestrator | 2026-04-13 03:15:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:53.218917 | orchestrator | 2026-04-13 03:15:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:53.218952 | orchestrator | 2026-04-13 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:56.287390 | orchestrator | 2026-04-13 03:15:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:56.289462 | orchestrator | 2026-04-13 03:15:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:56.289533 | orchestrator | 2026-04-13 03:15:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:15:59.343398 | orchestrator | 2026-04-13 03:15:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:15:59.345466 | orchestrator | 2026-04-13 03:15:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:15:59.345500 | orchestrator | 2026-04-13 03:15:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:02.400809 | orchestrator | 2026-04-13 03:16:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:02.404452 | orchestrator | 2026-04-13 03:16:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:02.404616 | orchestrator | 2026-04-13 03:16:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:05.462083 | orchestrator | 2026-04-13 
03:16:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:05.463627 | orchestrator | 2026-04-13 03:16:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:05.463679 | orchestrator | 2026-04-13 03:16:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:08.516999 | orchestrator | 2026-04-13 03:16:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:08.518896 | orchestrator | 2026-04-13 03:16:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:08.518935 | orchestrator | 2026-04-13 03:16:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:11.567776 | orchestrator | 2026-04-13 03:16:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:11.569588 | orchestrator | 2026-04-13 03:16:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:11.569641 | orchestrator | 2026-04-13 03:16:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:14.628195 | orchestrator | 2026-04-13 03:16:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:14.630166 | orchestrator | 2026-04-13 03:16:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:14.630202 | orchestrator | 2026-04-13 03:16:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:17.688853 | orchestrator | 2026-04-13 03:16:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:17.690527 | orchestrator | 2026-04-13 03:16:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:17.690566 | orchestrator | 2026-04-13 03:16:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:20.744899 | orchestrator | 2026-04-13 03:16:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:16:20.747609 | orchestrator | 2026-04-13 03:16:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:20.747666 | orchestrator | 2026-04-13 03:16:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:23.812664 | orchestrator | 2026-04-13 03:16:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:23.815434 | orchestrator | 2026-04-13 03:16:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:23.815667 | orchestrator | 2026-04-13 03:16:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:26.873500 | orchestrator | 2026-04-13 03:16:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:26.875814 | orchestrator | 2026-04-13 03:16:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:26.875852 | orchestrator | 2026-04-13 03:16:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:29.939832 | orchestrator | 2026-04-13 03:16:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:29.942772 | orchestrator | 2026-04-13 03:16:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:29.942946 | orchestrator | 2026-04-13 03:16:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:32.999822 | orchestrator | 2026-04-13 03:16:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:33.001946 | orchestrator | 2026-04-13 03:16:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:33.002105 | orchestrator | 2026-04-13 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:36.052950 | orchestrator | 2026-04-13 03:16:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:36.055327 | orchestrator | 2026-04-13 03:16:36 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:36.055677 | orchestrator | 2026-04-13 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:39.118177 | orchestrator | 2026-04-13 03:16:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:39.118504 | orchestrator | 2026-04-13 03:16:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:39.118540 | orchestrator | 2026-04-13 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:42.168371 | orchestrator | 2026-04-13 03:16:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:42.169496 | orchestrator | 2026-04-13 03:16:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:42.169754 | orchestrator | 2026-04-13 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:45.228799 | orchestrator | 2026-04-13 03:16:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:45.230858 | orchestrator | 2026-04-13 03:16:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:45.230921 | orchestrator | 2026-04-13 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:48.275787 | orchestrator | 2026-04-13 03:16:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:48.278456 | orchestrator | 2026-04-13 03:16:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:16:48.278519 | orchestrator | 2026-04-13 03:16:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:16:51.333563 | orchestrator | 2026-04-13 03:16:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:16:51.337074 | orchestrator | 2026-04-13 03:16:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:16:51.337135 | orchestrator | 2026-04-13 03:16:51 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:16:54.392556 | orchestrator | 2026-04-13 03:16:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:16:54.394008 | orchestrator | 2026-04-13 03:16:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:16:54.394123 | orchestrator | 2026-04-13 03:16:54 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:21:50.592004 | orchestrator | 2026-04-13 03:21:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:21:50.593474 | orchestrator | 2026-04-13 03:21:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:21:50.593499 | orchestrator | 2026-04-13 03:21:50 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:21:53.651376 | orchestrator | 2026-04-13 03:21:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:21:53.653447 | orchestrator | 2026-04-13 03:21:53 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:21:53.653567 | orchestrator | 2026-04-13 03:21:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:21:56.707804 | orchestrator | 2026-04-13 03:21:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:21:56.709288 | orchestrator | 2026-04-13 03:21:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:21:56.709371 | orchestrator | 2026-04-13 03:21:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:21:59.763905 | orchestrator | 2026-04-13 03:21:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:21:59.765284 | orchestrator | 2026-04-13 03:21:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:21:59.765366 | orchestrator | 2026-04-13 03:21:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:02.813801 | orchestrator | 2026-04-13 03:22:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:02.815484 | orchestrator | 2026-04-13 03:22:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:02.815623 | orchestrator | 2026-04-13 03:22:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:05.875600 | orchestrator | 2026-04-13 03:22:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:05.877908 | orchestrator | 2026-04-13 03:22:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:05.878080 | orchestrator | 2026-04-13 03:22:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:08.930989 | orchestrator | 2026-04-13 03:22:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:08.932776 | orchestrator | 2026-04-13 03:22:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:22:08.932866 | orchestrator | 2026-04-13 03:22:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:11.985851 | orchestrator | 2026-04-13 03:22:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:11.988527 | orchestrator | 2026-04-13 03:22:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:11.988616 | orchestrator | 2026-04-13 03:22:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:15.037475 | orchestrator | 2026-04-13 03:22:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:15.039077 | orchestrator | 2026-04-13 03:22:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:15.039360 | orchestrator | 2026-04-13 03:22:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:18.094661 | orchestrator | 2026-04-13 03:22:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:18.096590 | orchestrator | 2026-04-13 03:22:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:18.096647 | orchestrator | 2026-04-13 03:22:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:21.157557 | orchestrator | 2026-04-13 03:22:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:21.158464 | orchestrator | 2026-04-13 03:22:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:21.158492 | orchestrator | 2026-04-13 03:22:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:24.208534 | orchestrator | 2026-04-13 03:22:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:24.209621 | orchestrator | 2026-04-13 03:22:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:24.209665 | orchestrator | 2026-04-13 03:22:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:22:27.259539 | orchestrator | 2026-04-13 03:22:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:27.260335 | orchestrator | 2026-04-13 03:22:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:27.260368 | orchestrator | 2026-04-13 03:22:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:30.324067 | orchestrator | 2026-04-13 03:22:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:30.326744 | orchestrator | 2026-04-13 03:22:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:30.326813 | orchestrator | 2026-04-13 03:22:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:33.382376 | orchestrator | 2026-04-13 03:22:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:33.384513 | orchestrator | 2026-04-13 03:22:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:33.384556 | orchestrator | 2026-04-13 03:22:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:36.432768 | orchestrator | 2026-04-13 03:22:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:36.434857 | orchestrator | 2026-04-13 03:22:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:36.435141 | orchestrator | 2026-04-13 03:22:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:39.490805 | orchestrator | 2026-04-13 03:22:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:39.491842 | orchestrator | 2026-04-13 03:22:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:39.491894 | orchestrator | 2026-04-13 03:22:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:42.549462 | orchestrator | 2026-04-13 
03:22:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:42.550861 | orchestrator | 2026-04-13 03:22:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:42.550943 | orchestrator | 2026-04-13 03:22:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:45.603883 | orchestrator | 2026-04-13 03:22:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:45.605017 | orchestrator | 2026-04-13 03:22:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:45.605104 | orchestrator | 2026-04-13 03:22:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:48.661529 | orchestrator | 2026-04-13 03:22:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:48.663491 | orchestrator | 2026-04-13 03:22:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:48.663629 | orchestrator | 2026-04-13 03:22:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:51.722777 | orchestrator | 2026-04-13 03:22:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:51.724318 | orchestrator | 2026-04-13 03:22:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:51.724415 | orchestrator | 2026-04-13 03:22:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:54.770217 | orchestrator | 2026-04-13 03:22:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:22:54.771295 | orchestrator | 2026-04-13 03:22:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:54.771328 | orchestrator | 2026-04-13 03:22:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:22:57.825588 | orchestrator | 2026-04-13 03:22:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:22:57.828399 | orchestrator | 2026-04-13 03:22:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:22:57.828472 | orchestrator | 2026-04-13 03:22:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:00.884774 | orchestrator | 2026-04-13 03:23:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:00.887449 | orchestrator | 2026-04-13 03:23:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:00.887511 | orchestrator | 2026-04-13 03:23:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:03.939203 | orchestrator | 2026-04-13 03:23:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:03.940478 | orchestrator | 2026-04-13 03:23:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:03.940820 | orchestrator | 2026-04-13 03:23:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:06.996822 | orchestrator | 2026-04-13 03:23:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:06.998326 | orchestrator | 2026-04-13 03:23:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:06.998532 | orchestrator | 2026-04-13 03:23:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:10.043360 | orchestrator | 2026-04-13 03:23:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:10.045193 | orchestrator | 2026-04-13 03:23:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:10.045318 | orchestrator | 2026-04-13 03:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:13.093571 | orchestrator | 2026-04-13 03:23:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:13.094915 | orchestrator | 2026-04-13 03:23:13 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:13.094950 | orchestrator | 2026-04-13 03:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:16.146485 | orchestrator | 2026-04-13 03:23:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:16.147530 | orchestrator | 2026-04-13 03:23:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:16.147587 | orchestrator | 2026-04-13 03:23:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:19.197669 | orchestrator | 2026-04-13 03:23:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:19.198494 | orchestrator | 2026-04-13 03:23:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:19.198535 | orchestrator | 2026-04-13 03:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:22.244129 | orchestrator | 2026-04-13 03:23:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:22.245300 | orchestrator | 2026-04-13 03:23:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:22.245338 | orchestrator | 2026-04-13 03:23:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:25.301373 | orchestrator | 2026-04-13 03:23:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:25.302212 | orchestrator | 2026-04-13 03:23:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:25.302278 | orchestrator | 2026-04-13 03:23:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:28.353637 | orchestrator | 2026-04-13 03:23:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:28.357112 | orchestrator | 2026-04-13 03:23:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:23:28.357219 | orchestrator | 2026-04-13 03:23:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:31.417512 | orchestrator | 2026-04-13 03:23:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:31.420421 | orchestrator | 2026-04-13 03:23:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:31.421164 | orchestrator | 2026-04-13 03:23:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:34.483154 | orchestrator | 2026-04-13 03:23:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:34.485496 | orchestrator | 2026-04-13 03:23:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:34.485532 | orchestrator | 2026-04-13 03:23:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:37.549991 | orchestrator | 2026-04-13 03:23:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:37.551085 | orchestrator | 2026-04-13 03:23:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:37.551130 | orchestrator | 2026-04-13 03:23:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:40.605370 | orchestrator | 2026-04-13 03:23:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:40.607021 | orchestrator | 2026-04-13 03:23:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:40.607047 | orchestrator | 2026-04-13 03:23:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:43.653316 | orchestrator | 2026-04-13 03:23:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:43.654636 | orchestrator | 2026-04-13 03:23:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:43.654686 | orchestrator | 2026-04-13 03:23:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:23:46.706864 | orchestrator | 2026-04-13 03:23:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:46.709585 | orchestrator | 2026-04-13 03:23:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:46.709668 | orchestrator | 2026-04-13 03:23:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:49.766976 | orchestrator | 2026-04-13 03:23:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:49.769985 | orchestrator | 2026-04-13 03:23:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:49.770106 | orchestrator | 2026-04-13 03:23:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:52.824798 | orchestrator | 2026-04-13 03:23:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:52.827657 | orchestrator | 2026-04-13 03:23:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:52.827761 | orchestrator | 2026-04-13 03:23:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:55.887061 | orchestrator | 2026-04-13 03:23:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:55.888660 | orchestrator | 2026-04-13 03:23:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:55.888707 | orchestrator | 2026-04-13 03:23:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:23:58.944753 | orchestrator | 2026-04-13 03:23:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:23:58.947269 | orchestrator | 2026-04-13 03:23:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:23:58.947332 | orchestrator | 2026-04-13 03:23:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:24:02.000376 | orchestrator | 2026-04-13 
03:24:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:24:02.002618 | orchestrator | 2026-04-13 03:24:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:24:02.002678 | orchestrator | 2026-04-13 03:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:24:05.058360 | orchestrator | 2026-04-13 03:24:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:24:05.060697 | orchestrator | 2026-04-13 03:24:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:24:05.060757 | orchestrator | 2026-04-13 03:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:08.210709 | orchestrator | 2026-04-13 03:26:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:08.210828 | orchestrator | 2026-04-13 03:26:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:08.210846 | orchestrator | 2026-04-13 03:26:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:11.257156 | orchestrator | 2026-04-13 03:26:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:11.258007 | orchestrator | 2026-04-13 03:26:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:11.258200 | orchestrator | 2026-04-13 03:26:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:14.294803 | orchestrator | 2026-04-13 03:26:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:14.295995 | orchestrator | 2026-04-13 03:26:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:14.296020 | orchestrator | 2026-04-13 03:26:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:17.345794 | orchestrator | 2026-04-13 03:26:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:26:17.345846 | orchestrator | 2026-04-13 03:26:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:17.345859 | orchestrator | 2026-04-13 03:26:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:20.394764 | orchestrator | 2026-04-13 03:26:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:20.395849 | orchestrator | 2026-04-13 03:26:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:20.395933 | orchestrator | 2026-04-13 03:26:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:23.438415 | orchestrator | 2026-04-13 03:26:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:23.440851 | orchestrator | 2026-04-13 03:26:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:23.440928 | orchestrator | 2026-04-13 03:26:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:26.496767 | orchestrator | 2026-04-13 03:26:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:26.497653 | orchestrator | 2026-04-13 03:26:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:26.497710 | orchestrator | 2026-04-13 03:26:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:29.551106 | orchestrator | 2026-04-13 03:26:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:29.552564 | orchestrator | 2026-04-13 03:26:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:29.552622 | orchestrator | 2026-04-13 03:26:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:32.593692 | orchestrator | 2026-04-13 03:26:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:32.596992 | orchestrator | 2026-04-13 03:26:32 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:32.597052 | orchestrator | 2026-04-13 03:26:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:35.647201 | orchestrator | 2026-04-13 03:26:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:35.647990 | orchestrator | 2026-04-13 03:26:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:35.648029 | orchestrator | 2026-04-13 03:26:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:38.702432 | orchestrator | 2026-04-13 03:26:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:38.707070 | orchestrator | 2026-04-13 03:26:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:38.707158 | orchestrator | 2026-04-13 03:26:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:41.757552 | orchestrator | 2026-04-13 03:26:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:41.760714 | orchestrator | 2026-04-13 03:26:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:41.762069 | orchestrator | 2026-04-13 03:26:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:44.811917 | orchestrator | 2026-04-13 03:26:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:44.814586 | orchestrator | 2026-04-13 03:26:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:44.814665 | orchestrator | 2026-04-13 03:26:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:47.863340 | orchestrator | 2026-04-13 03:26:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:47.864553 | orchestrator | 2026-04-13 03:26:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:26:47.864633 | orchestrator | 2026-04-13 03:26:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:50.917728 | orchestrator | 2026-04-13 03:26:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:50.921972 | orchestrator | 2026-04-13 03:26:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:50.922371 | orchestrator | 2026-04-13 03:26:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:53.976934 | orchestrator | 2026-04-13 03:26:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:53.979185 | orchestrator | 2026-04-13 03:26:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:53.979546 | orchestrator | 2026-04-13 03:26:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:26:57.029038 | orchestrator | 2026-04-13 03:26:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:26:57.031395 | orchestrator | 2026-04-13 03:26:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:26:57.031459 | orchestrator | 2026-04-13 03:26:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:00.082409 | orchestrator | 2026-04-13 03:27:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:00.083458 | orchestrator | 2026-04-13 03:27:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:00.083493 | orchestrator | 2026-04-13 03:27:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:03.134587 | orchestrator | 2026-04-13 03:27:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:03.136021 | orchestrator | 2026-04-13 03:27:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:03.136066 | orchestrator | 2026-04-13 03:27:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:27:06.185729 | orchestrator | 2026-04-13 03:27:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:06.187728 | orchestrator | 2026-04-13 03:27:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:06.187795 | orchestrator | 2026-04-13 03:27:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:09.236312 | orchestrator | 2026-04-13 03:27:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:09.237401 | orchestrator | 2026-04-13 03:27:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:09.237471 | orchestrator | 2026-04-13 03:27:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:12.290960 | orchestrator | 2026-04-13 03:27:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:12.293359 | orchestrator | 2026-04-13 03:27:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:12.293403 | orchestrator | 2026-04-13 03:27:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:15.352580 | orchestrator | 2026-04-13 03:27:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:15.354012 | orchestrator | 2026-04-13 03:27:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:15.354134 | orchestrator | 2026-04-13 03:27:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:18.404230 | orchestrator | 2026-04-13 03:27:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:18.405736 | orchestrator | 2026-04-13 03:27:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:18.405814 | orchestrator | 2026-04-13 03:27:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:21.460832 | orchestrator | 2026-04-13 
03:27:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:21.462464 | orchestrator | 2026-04-13 03:27:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:21.462503 | orchestrator | 2026-04-13 03:27:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:24.521757 | orchestrator | 2026-04-13 03:27:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:24.522990 | orchestrator | 2026-04-13 03:27:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:24.523052 | orchestrator | 2026-04-13 03:27:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:27.572046 | orchestrator | 2026-04-13 03:27:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:27.575008 | orchestrator | 2026-04-13 03:27:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:27.575084 | orchestrator | 2026-04-13 03:27:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:30.626536 | orchestrator | 2026-04-13 03:27:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:30.628808 | orchestrator | 2026-04-13 03:27:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:30.628947 | orchestrator | 2026-04-13 03:27:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:33.686879 | orchestrator | 2026-04-13 03:27:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:27:33.689769 | orchestrator | 2026-04-13 03:27:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:27:33.690182 | orchestrator | 2026-04-13 03:27:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:27:36.738609 | orchestrator | 2026-04-13 03:27:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:27:36.738867 | orchestrator | 2026-04-13 03:27:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:27:36.738895 | orchestrator | 2026-04-13 03:27:36 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:27:39 through 03:33:06: tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remain in state STARTED, followed each cycle by "Wait 1 second(s) until the next check" ...]
2026-04-13 03:33:09.469464 | orchestrator | 2026-04-13 03:33:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:09.470527 | orchestrator | 2026-04-13 03:33:09 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:09.470562 | orchestrator | 2026-04-13 03:33:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:12.522528 | orchestrator | 2026-04-13 03:33:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:12.524632 | orchestrator | 2026-04-13 03:33:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:12.524699 | orchestrator | 2026-04-13 03:33:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:15.572712 | orchestrator | 2026-04-13 03:33:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:15.574871 | orchestrator | 2026-04-13 03:33:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:15.574965 | orchestrator | 2026-04-13 03:33:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:18.616900 | orchestrator | 2026-04-13 03:33:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:18.617711 | orchestrator | 2026-04-13 03:33:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:18.617730 | orchestrator | 2026-04-13 03:33:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:21.667340 | orchestrator | 2026-04-13 03:33:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:21.668594 | orchestrator | 2026-04-13 03:33:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:21.668648 | orchestrator | 2026-04-13 03:33:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:24.730304 | orchestrator | 2026-04-13 03:33:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:24.731965 | orchestrator | 2026-04-13 03:33:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:33:24.732027 | orchestrator | 2026-04-13 03:33:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:27.798169 | orchestrator | 2026-04-13 03:33:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:27.799480 | orchestrator | 2026-04-13 03:33:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:27.799549 | orchestrator | 2026-04-13 03:33:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:30.858741 | orchestrator | 2026-04-13 03:33:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:30.858833 | orchestrator | 2026-04-13 03:33:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:30.858843 | orchestrator | 2026-04-13 03:33:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:33.909820 | orchestrator | 2026-04-13 03:33:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:33.912059 | orchestrator | 2026-04-13 03:33:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:33.912258 | orchestrator | 2026-04-13 03:33:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:36.975663 | orchestrator | 2026-04-13 03:33:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:36.977720 | orchestrator | 2026-04-13 03:33:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:36.977747 | orchestrator | 2026-04-13 03:33:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:40.036861 | orchestrator | 2026-04-13 03:33:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:40.041042 | orchestrator | 2026-04-13 03:33:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:40.041121 | orchestrator | 2026-04-13 03:33:40 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:33:43.092058 | orchestrator | 2026-04-13 03:33:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:43.096608 | orchestrator | 2026-04-13 03:33:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:43.096711 | orchestrator | 2026-04-13 03:33:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:46.147356 | orchestrator | 2026-04-13 03:33:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:46.150845 | orchestrator | 2026-04-13 03:33:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:46.150910 | orchestrator | 2026-04-13 03:33:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:49.204780 | orchestrator | 2026-04-13 03:33:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:49.207234 | orchestrator | 2026-04-13 03:33:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:49.207289 | orchestrator | 2026-04-13 03:33:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:52.265775 | orchestrator | 2026-04-13 03:33:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:52.266228 | orchestrator | 2026-04-13 03:33:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:52.266395 | orchestrator | 2026-04-13 03:33:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:55.316824 | orchestrator | 2026-04-13 03:33:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:55.317632 | orchestrator | 2026-04-13 03:33:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:55.317670 | orchestrator | 2026-04-13 03:33:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:33:58.365029 | orchestrator | 2026-04-13 
03:33:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:33:58.366190 | orchestrator | 2026-04-13 03:33:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:33:58.366355 | orchestrator | 2026-04-13 03:33:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:01.422956 | orchestrator | 2026-04-13 03:34:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:01.425318 | orchestrator | 2026-04-13 03:34:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:01.425409 | orchestrator | 2026-04-13 03:34:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:04.476923 | orchestrator | 2026-04-13 03:34:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:04.480174 | orchestrator | 2026-04-13 03:34:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:04.480254 | orchestrator | 2026-04-13 03:34:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:07.534992 | orchestrator | 2026-04-13 03:34:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:07.537157 | orchestrator | 2026-04-13 03:34:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:07.537292 | orchestrator | 2026-04-13 03:34:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:10.579818 | orchestrator | 2026-04-13 03:34:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:10.581634 | orchestrator | 2026-04-13 03:34:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:10.581683 | orchestrator | 2026-04-13 03:34:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:13.630420 | orchestrator | 2026-04-13 03:34:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:34:13.632129 | orchestrator | 2026-04-13 03:34:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:13.632598 | orchestrator | 2026-04-13 03:34:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:16.684049 | orchestrator | 2026-04-13 03:34:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:16.687038 | orchestrator | 2026-04-13 03:34:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:16.687098 | orchestrator | 2026-04-13 03:34:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:19.733927 | orchestrator | 2026-04-13 03:34:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:19.734872 | orchestrator | 2026-04-13 03:34:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:19.734910 | orchestrator | 2026-04-13 03:34:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:22.778978 | orchestrator | 2026-04-13 03:34:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:22.781422 | orchestrator | 2026-04-13 03:34:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:22.781554 | orchestrator | 2026-04-13 03:34:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:25.829201 | orchestrator | 2026-04-13 03:34:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:25.831791 | orchestrator | 2026-04-13 03:34:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:25.831979 | orchestrator | 2026-04-13 03:34:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:28.876079 | orchestrator | 2026-04-13 03:34:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:28.877318 | orchestrator | 2026-04-13 03:34:28 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:28.877339 | orchestrator | 2026-04-13 03:34:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:31.926893 | orchestrator | 2026-04-13 03:34:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:31.929658 | orchestrator | 2026-04-13 03:34:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:31.929700 | orchestrator | 2026-04-13 03:34:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:34.978630 | orchestrator | 2026-04-13 03:34:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:34.980201 | orchestrator | 2026-04-13 03:34:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:34.980279 | orchestrator | 2026-04-13 03:34:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:38.037315 | orchestrator | 2026-04-13 03:34:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:38.039834 | orchestrator | 2026-04-13 03:34:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:38.040029 | orchestrator | 2026-04-13 03:34:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:41.093673 | orchestrator | 2026-04-13 03:34:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:41.094756 | orchestrator | 2026-04-13 03:34:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:41.095114 | orchestrator | 2026-04-13 03:34:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:44.151508 | orchestrator | 2026-04-13 03:34:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:44.153278 | orchestrator | 2026-04-13 03:34:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:34:44.153427 | orchestrator | 2026-04-13 03:34:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:47.206406 | orchestrator | 2026-04-13 03:34:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:47.207234 | orchestrator | 2026-04-13 03:34:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:47.207265 | orchestrator | 2026-04-13 03:34:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:50.263342 | orchestrator | 2026-04-13 03:34:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:50.268707 | orchestrator | 2026-04-13 03:34:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:50.268827 | orchestrator | 2026-04-13 03:34:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:53.315909 | orchestrator | 2026-04-13 03:34:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:53.319162 | orchestrator | 2026-04-13 03:34:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:53.319289 | orchestrator | 2026-04-13 03:34:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:56.368993 | orchestrator | 2026-04-13 03:34:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:56.370567 | orchestrator | 2026-04-13 03:34:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:56.370617 | orchestrator | 2026-04-13 03:34:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:34:59.422228 | orchestrator | 2026-04-13 03:34:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:34:59.424341 | orchestrator | 2026-04-13 03:34:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:34:59.424400 | orchestrator | 2026-04-13 03:34:59 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:35:02.480679 | orchestrator | 2026-04-13 03:35:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:02.481644 | orchestrator | 2026-04-13 03:35:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:02.481673 | orchestrator | 2026-04-13 03:35:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:05.528541 | orchestrator | 2026-04-13 03:35:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:05.530405 | orchestrator | 2026-04-13 03:35:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:05.530519 | orchestrator | 2026-04-13 03:35:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:08.582731 | orchestrator | 2026-04-13 03:35:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:08.583272 | orchestrator | 2026-04-13 03:35:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:08.583323 | orchestrator | 2026-04-13 03:35:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:11.640393 | orchestrator | 2026-04-13 03:35:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:11.641526 | orchestrator | 2026-04-13 03:35:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:11.641563 | orchestrator | 2026-04-13 03:35:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:14.701098 | orchestrator | 2026-04-13 03:35:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:14.702563 | orchestrator | 2026-04-13 03:35:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:14.702595 | orchestrator | 2026-04-13 03:35:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:17.757154 | orchestrator | 2026-04-13 
03:35:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:17.760787 | orchestrator | 2026-04-13 03:35:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:17.760869 | orchestrator | 2026-04-13 03:35:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:20.810983 | orchestrator | 2026-04-13 03:35:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:20.812764 | orchestrator | 2026-04-13 03:35:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:20.812815 | orchestrator | 2026-04-13 03:35:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:23.863963 | orchestrator | 2026-04-13 03:35:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:23.864890 | orchestrator | 2026-04-13 03:35:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:23.864944 | orchestrator | 2026-04-13 03:35:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:26.919624 | orchestrator | 2026-04-13 03:35:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:26.922516 | orchestrator | 2026-04-13 03:35:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:26.922571 | orchestrator | 2026-04-13 03:35:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:29.983639 | orchestrator | 2026-04-13 03:35:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:29.985051 | orchestrator | 2026-04-13 03:35:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:29.985093 | orchestrator | 2026-04-13 03:35:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:33.054448 | orchestrator | 2026-04-13 03:35:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:35:33.056641 | orchestrator | 2026-04-13 03:35:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:33.056670 | orchestrator | 2026-04-13 03:35:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:36.112139 | orchestrator | 2026-04-13 03:35:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:36.113944 | orchestrator | 2026-04-13 03:35:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:36.113980 | orchestrator | 2026-04-13 03:35:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:39.166792 | orchestrator | 2026-04-13 03:35:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:39.168361 | orchestrator | 2026-04-13 03:35:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:39.168402 | orchestrator | 2026-04-13 03:35:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:42.223890 | orchestrator | 2026-04-13 03:35:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:42.225655 | orchestrator | 2026-04-13 03:35:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:42.225750 | orchestrator | 2026-04-13 03:35:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:45.278713 | orchestrator | 2026-04-13 03:35:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:45.281519 | orchestrator | 2026-04-13 03:35:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:45.281711 | orchestrator | 2026-04-13 03:35:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:48.333112 | orchestrator | 2026-04-13 03:35:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:48.335132 | orchestrator | 2026-04-13 03:35:48 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:48.335170 | orchestrator | 2026-04-13 03:35:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:51.391106 | orchestrator | 2026-04-13 03:35:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:51.393237 | orchestrator | 2026-04-13 03:35:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:51.393383 | orchestrator | 2026-04-13 03:35:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:54.447702 | orchestrator | 2026-04-13 03:35:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:54.449639 | orchestrator | 2026-04-13 03:35:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:54.449709 | orchestrator | 2026-04-13 03:35:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:35:57.504178 | orchestrator | 2026-04-13 03:35:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:35:57.505755 | orchestrator | 2026-04-13 03:35:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:35:57.505792 | orchestrator | 2026-04-13 03:35:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:00.563406 | orchestrator | 2026-04-13 03:36:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:00.565007 | orchestrator | 2026-04-13 03:36:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:00.565062 | orchestrator | 2026-04-13 03:36:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:03.627306 | orchestrator | 2026-04-13 03:36:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:03.628085 | orchestrator | 2026-04-13 03:36:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:36:03.628120 | orchestrator | 2026-04-13 03:36:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:06.680409 | orchestrator | 2026-04-13 03:36:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:06.682372 | orchestrator | 2026-04-13 03:36:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:06.682483 | orchestrator | 2026-04-13 03:36:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:09.725668 | orchestrator | 2026-04-13 03:36:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:09.729027 | orchestrator | 2026-04-13 03:36:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:09.729116 | orchestrator | 2026-04-13 03:36:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:12.777454 | orchestrator | 2026-04-13 03:36:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:12.779302 | orchestrator | 2026-04-13 03:36:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:12.779348 | orchestrator | 2026-04-13 03:36:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:15.830770 | orchestrator | 2026-04-13 03:36:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:15.833053 | orchestrator | 2026-04-13 03:36:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:15.833115 | orchestrator | 2026-04-13 03:36:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:18.885950 | orchestrator | 2026-04-13 03:36:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:18.888798 | orchestrator | 2026-04-13 03:36:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:18.888867 | orchestrator | 2026-04-13 03:36:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:36:21.937766 | orchestrator | 2026-04-13 03:36:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:21.940672 | orchestrator | 2026-04-13 03:36:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:21.940728 | orchestrator | 2026-04-13 03:36:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:24.995268 | orchestrator | 2026-04-13 03:36:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:24.998182 | orchestrator | 2026-04-13 03:36:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:24.998224 | orchestrator | 2026-04-13 03:36:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:28.058675 | orchestrator | 2026-04-13 03:36:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:28.061023 | orchestrator | 2026-04-13 03:36:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:28.061095 | orchestrator | 2026-04-13 03:36:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:31.123911 | orchestrator | 2026-04-13 03:36:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:31.125657 | orchestrator | 2026-04-13 03:36:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:31.125870 | orchestrator | 2026-04-13 03:36:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:34.171403 | orchestrator | 2026-04-13 03:36:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:34.171878 | orchestrator | 2026-04-13 03:36:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:34.171908 | orchestrator | 2026-04-13 03:36:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:37.220513 | orchestrator | 2026-04-13 
03:36:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:37.223178 | orchestrator | 2026-04-13 03:36:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:37.223269 | orchestrator | 2026-04-13 03:36:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:40.275965 | orchestrator | 2026-04-13 03:36:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:40.277999 | orchestrator | 2026-04-13 03:36:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:40.278135 | orchestrator | 2026-04-13 03:36:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:43.324421 | orchestrator | 2026-04-13 03:36:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:43.324656 | orchestrator | 2026-04-13 03:36:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:43.324685 | orchestrator | 2026-04-13 03:36:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:46.377811 | orchestrator | 2026-04-13 03:36:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:46.379675 | orchestrator | 2026-04-13 03:36:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:46.379754 | orchestrator | 2026-04-13 03:36:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:49.425811 | orchestrator | 2026-04-13 03:36:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:36:49.426796 | orchestrator | 2026-04-13 03:36:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:49.427200 | orchestrator | 2026-04-13 03:36:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:36:52.481287 | orchestrator | 2026-04-13 03:36:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:36:52.482583 | orchestrator | 2026-04-13 03:36:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:36:52.482615 | orchestrator | 2026-04-13 03:36:52 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:36:55 through 03:42:07; tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remained in state STARTED throughout ...]
2026-04-13 03:42:10.182746 | orchestrator | 2026-04-13 03:42:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state
STARTED 2026-04-13 03:42:10.185422 | orchestrator | 2026-04-13 03:42:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:10.185463 | orchestrator | 2026-04-13 03:42:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:13.235361 | orchestrator | 2026-04-13 03:42:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:13.237783 | orchestrator | 2026-04-13 03:42:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:13.237828 | orchestrator | 2026-04-13 03:42:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:16.281929 | orchestrator | 2026-04-13 03:42:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:16.284961 | orchestrator | 2026-04-13 03:42:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:16.285044 | orchestrator | 2026-04-13 03:42:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:19.346186 | orchestrator | 2026-04-13 03:42:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:19.348159 | orchestrator | 2026-04-13 03:42:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:19.348210 | orchestrator | 2026-04-13 03:42:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:22.403197 | orchestrator | 2026-04-13 03:42:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:22.406951 | orchestrator | 2026-04-13 03:42:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:22.407037 | orchestrator | 2026-04-13 03:42:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:25.458367 | orchestrator | 2026-04-13 03:42:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:25.461237 | orchestrator | 2026-04-13 03:42:25 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:25.461533 | orchestrator | 2026-04-13 03:42:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:28.512737 | orchestrator | 2026-04-13 03:42:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:28.514828 | orchestrator | 2026-04-13 03:42:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:28.514871 | orchestrator | 2026-04-13 03:42:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:31.559203 | orchestrator | 2026-04-13 03:42:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:31.561859 | orchestrator | 2026-04-13 03:42:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:31.561935 | orchestrator | 2026-04-13 03:42:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:34.612702 | orchestrator | 2026-04-13 03:42:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:34.614462 | orchestrator | 2026-04-13 03:42:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:34.614520 | orchestrator | 2026-04-13 03:42:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:37.668869 | orchestrator | 2026-04-13 03:42:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:37.671660 | orchestrator | 2026-04-13 03:42:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:37.671726 | orchestrator | 2026-04-13 03:42:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:40.722314 | orchestrator | 2026-04-13 03:42:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:40.723993 | orchestrator | 2026-04-13 03:42:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:42:40.724027 | orchestrator | 2026-04-13 03:42:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:43.771194 | orchestrator | 2026-04-13 03:42:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:43.772202 | orchestrator | 2026-04-13 03:42:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:43.772363 | orchestrator | 2026-04-13 03:42:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:46.821806 | orchestrator | 2026-04-13 03:42:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:46.823719 | orchestrator | 2026-04-13 03:42:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:46.823788 | orchestrator | 2026-04-13 03:42:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:49.875618 | orchestrator | 2026-04-13 03:42:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:49.876809 | orchestrator | 2026-04-13 03:42:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:49.876844 | orchestrator | 2026-04-13 03:42:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:52.931292 | orchestrator | 2026-04-13 03:42:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:52.933071 | orchestrator | 2026-04-13 03:42:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:52.933110 | orchestrator | 2026-04-13 03:42:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:42:55.985086 | orchestrator | 2026-04-13 03:42:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:55.986777 | orchestrator | 2026-04-13 03:42:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:55.987121 | orchestrator | 2026-04-13 03:42:55 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:42:59.034007 | orchestrator | 2026-04-13 03:42:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:42:59.035455 | orchestrator | 2026-04-13 03:42:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:42:59.035541 | orchestrator | 2026-04-13 03:42:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:02.079721 | orchestrator | 2026-04-13 03:43:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:02.080797 | orchestrator | 2026-04-13 03:43:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:02.080831 | orchestrator | 2026-04-13 03:43:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:05.129782 | orchestrator | 2026-04-13 03:43:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:05.131672 | orchestrator | 2026-04-13 03:43:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:05.131759 | orchestrator | 2026-04-13 03:43:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:08.186614 | orchestrator | 2026-04-13 03:43:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:08.189507 | orchestrator | 2026-04-13 03:43:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:08.189589 | orchestrator | 2026-04-13 03:43:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:11.242562 | orchestrator | 2026-04-13 03:43:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:11.245569 | orchestrator | 2026-04-13 03:43:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:11.245655 | orchestrator | 2026-04-13 03:43:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:14.298586 | orchestrator | 2026-04-13 
03:43:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:14.300469 | orchestrator | 2026-04-13 03:43:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:14.300562 | orchestrator | 2026-04-13 03:43:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:17.354516 | orchestrator | 2026-04-13 03:43:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:17.357568 | orchestrator | 2026-04-13 03:43:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:17.357635 | orchestrator | 2026-04-13 03:43:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:20.398515 | orchestrator | 2026-04-13 03:43:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:20.400973 | orchestrator | 2026-04-13 03:43:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:20.401036 | orchestrator | 2026-04-13 03:43:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:23.453391 | orchestrator | 2026-04-13 03:43:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:23.455121 | orchestrator | 2026-04-13 03:43:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:23.455178 | orchestrator | 2026-04-13 03:43:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:26.501762 | orchestrator | 2026-04-13 03:43:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:26.503327 | orchestrator | 2026-04-13 03:43:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:26.503358 | orchestrator | 2026-04-13 03:43:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:29.552002 | orchestrator | 2026-04-13 03:43:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:43:29.556273 | orchestrator | 2026-04-13 03:43:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:29.556336 | orchestrator | 2026-04-13 03:43:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:32.600280 | orchestrator | 2026-04-13 03:43:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:32.601763 | orchestrator | 2026-04-13 03:43:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:32.601821 | orchestrator | 2026-04-13 03:43:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:35.645624 | orchestrator | 2026-04-13 03:43:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:35.648564 | orchestrator | 2026-04-13 03:43:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:35.648651 | orchestrator | 2026-04-13 03:43:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:38.694602 | orchestrator | 2026-04-13 03:43:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:38.696859 | orchestrator | 2026-04-13 03:43:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:38.696920 | orchestrator | 2026-04-13 03:43:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:41.743080 | orchestrator | 2026-04-13 03:43:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:41.745064 | orchestrator | 2026-04-13 03:43:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:41.745118 | orchestrator | 2026-04-13 03:43:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:44.796645 | orchestrator | 2026-04-13 03:43:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:44.798475 | orchestrator | 2026-04-13 03:43:44 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:44.798517 | orchestrator | 2026-04-13 03:43:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:47.849604 | orchestrator | 2026-04-13 03:43:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:47.850395 | orchestrator | 2026-04-13 03:43:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:47.850615 | orchestrator | 2026-04-13 03:43:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:50.894537 | orchestrator | 2026-04-13 03:43:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:50.896247 | orchestrator | 2026-04-13 03:43:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:50.896321 | orchestrator | 2026-04-13 03:43:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:53.947749 | orchestrator | 2026-04-13 03:43:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:53.949294 | orchestrator | 2026-04-13 03:43:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:53.949334 | orchestrator | 2026-04-13 03:43:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:43:56.998832 | orchestrator | 2026-04-13 03:43:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:43:57.001556 | orchestrator | 2026-04-13 03:43:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:43:57.001762 | orchestrator | 2026-04-13 03:43:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:00.042951 | orchestrator | 2026-04-13 03:44:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:00.044771 | orchestrator | 2026-04-13 03:44:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:44:00.044844 | orchestrator | 2026-04-13 03:44:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:03.089691 | orchestrator | 2026-04-13 03:44:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:03.091525 | orchestrator | 2026-04-13 03:44:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:03.091622 | orchestrator | 2026-04-13 03:44:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:06.139912 | orchestrator | 2026-04-13 03:44:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:06.142569 | orchestrator | 2026-04-13 03:44:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:06.142623 | orchestrator | 2026-04-13 03:44:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:09.187324 | orchestrator | 2026-04-13 03:44:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:09.188688 | orchestrator | 2026-04-13 03:44:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:09.188749 | orchestrator | 2026-04-13 03:44:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:12.233797 | orchestrator | 2026-04-13 03:44:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:12.237048 | orchestrator | 2026-04-13 03:44:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:12.237287 | orchestrator | 2026-04-13 03:44:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:15.286994 | orchestrator | 2026-04-13 03:44:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:15.289412 | orchestrator | 2026-04-13 03:44:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:15.289493 | orchestrator | 2026-04-13 03:44:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:44:18.339348 | orchestrator | 2026-04-13 03:44:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:18.342312 | orchestrator | 2026-04-13 03:44:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:18.342371 | orchestrator | 2026-04-13 03:44:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:21.394826 | orchestrator | 2026-04-13 03:44:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:21.396747 | orchestrator | 2026-04-13 03:44:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:21.397099 | orchestrator | 2026-04-13 03:44:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:24.458733 | orchestrator | 2026-04-13 03:44:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:24.463030 | orchestrator | 2026-04-13 03:44:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:24.463104 | orchestrator | 2026-04-13 03:44:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:27.510097 | orchestrator | 2026-04-13 03:44:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:27.513253 | orchestrator | 2026-04-13 03:44:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:27.513297 | orchestrator | 2026-04-13 03:44:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:30.561672 | orchestrator | 2026-04-13 03:44:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:30.565182 | orchestrator | 2026-04-13 03:44:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:30.565262 | orchestrator | 2026-04-13 03:44:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:33.615958 | orchestrator | 2026-04-13 
03:44:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:33.617290 | orchestrator | 2026-04-13 03:44:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:33.617362 | orchestrator | 2026-04-13 03:44:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:36.671926 | orchestrator | 2026-04-13 03:44:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:36.676981 | orchestrator | 2026-04-13 03:44:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:36.677070 | orchestrator | 2026-04-13 03:44:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:39.736280 | orchestrator | 2026-04-13 03:44:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:39.738932 | orchestrator | 2026-04-13 03:44:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:39.739028 | orchestrator | 2026-04-13 03:44:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:42.780112 | orchestrator | 2026-04-13 03:44:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:42.782235 | orchestrator | 2026-04-13 03:44:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:42.782312 | orchestrator | 2026-04-13 03:44:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:45.832607 | orchestrator | 2026-04-13 03:44:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:45.835344 | orchestrator | 2026-04-13 03:44:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:45.835492 | orchestrator | 2026-04-13 03:44:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:48.887709 | orchestrator | 2026-04-13 03:44:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:44:48.890981 | orchestrator | 2026-04-13 03:44:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:48.891083 | orchestrator | 2026-04-13 03:44:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:51.942623 | orchestrator | 2026-04-13 03:44:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:51.943938 | orchestrator | 2026-04-13 03:44:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:51.944017 | orchestrator | 2026-04-13 03:44:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:54.994067 | orchestrator | 2026-04-13 03:44:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:54.996007 | orchestrator | 2026-04-13 03:44:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:54.996425 | orchestrator | 2026-04-13 03:44:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:44:58.045628 | orchestrator | 2026-04-13 03:44:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:44:58.048157 | orchestrator | 2026-04-13 03:44:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:44:58.048208 | orchestrator | 2026-04-13 03:44:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:01.094523 | orchestrator | 2026-04-13 03:45:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:01.097261 | orchestrator | 2026-04-13 03:45:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:01.097651 | orchestrator | 2026-04-13 03:45:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:04.144370 | orchestrator | 2026-04-13 03:45:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:04.144626 | orchestrator | 2026-04-13 03:45:04 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:04.145507 | orchestrator | 2026-04-13 03:45:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:07.190268 | orchestrator | 2026-04-13 03:45:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:07.192544 | orchestrator | 2026-04-13 03:45:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:07.192600 | orchestrator | 2026-04-13 03:45:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:10.242914 | orchestrator | 2026-04-13 03:45:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:10.244924 | orchestrator | 2026-04-13 03:45:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:10.244959 | orchestrator | 2026-04-13 03:45:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:13.277397 | orchestrator | 2026-04-13 03:45:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:13.278271 | orchestrator | 2026-04-13 03:45:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:13.278382 | orchestrator | 2026-04-13 03:45:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:16.328954 | orchestrator | 2026-04-13 03:45:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:16.330388 | orchestrator | 2026-04-13 03:45:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:16.330442 | orchestrator | 2026-04-13 03:45:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:19.379343 | orchestrator | 2026-04-13 03:45:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:19.381015 | orchestrator | 2026-04-13 03:45:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:45:19.381166 | orchestrator | 2026-04-13 03:45:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:22.430234 | orchestrator | 2026-04-13 03:45:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:22.432381 | orchestrator | 2026-04-13 03:45:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:22.432528 | orchestrator | 2026-04-13 03:45:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:25.488867 | orchestrator | 2026-04-13 03:45:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:25.492226 | orchestrator | 2026-04-13 03:45:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:25.492278 | orchestrator | 2026-04-13 03:45:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:28.536365 | orchestrator | 2026-04-13 03:45:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:28.537860 | orchestrator | 2026-04-13 03:45:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:28.537899 | orchestrator | 2026-04-13 03:45:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:31.578110 | orchestrator | 2026-04-13 03:45:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:31.578252 | orchestrator | 2026-04-13 03:45:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:31.578265 | orchestrator | 2026-04-13 03:45:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:45:34.625891 | orchestrator | 2026-04-13 03:45:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:45:34.629450 | orchestrator | 2026-04-13 03:45:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:45:34.629554 | orchestrator | 2026-04-13 03:45:34 | INFO  | Wait 1 second(s) 
until the next check
2026-04-13 03:45:37.678138 | orchestrator | 2026-04-13 03:45:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:37.679510 | orchestrator | 2026-04-13 03:45:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:37.679565 | orchestrator | 2026-04-13 03:45:37 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:40.729404 | orchestrator | 2026-04-13 03:45:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:40.731363 | orchestrator | 2026-04-13 03:45:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:40.731435 | orchestrator | 2026-04-13 03:45:40 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:43.787410 | orchestrator | 2026-04-13 03:45:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:43.789014 | orchestrator | 2026-04-13 03:45:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:43.789115 | orchestrator | 2026-04-13 03:45:43 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:46.842296 | orchestrator | 2026-04-13 03:45:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:46.846264 | orchestrator | 2026-04-13 03:45:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:46.846381 | orchestrator | 2026-04-13 03:45:46 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:49.896399 | orchestrator | 2026-04-13 03:45:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:49.898533 | orchestrator | 2026-04-13 03:45:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:49.898880 | orchestrator | 2026-04-13 03:45:49 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:52.949450 | orchestrator | 2026-04-13 03:45:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:52.952212 | orchestrator | 2026-04-13 03:45:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:52.952259 | orchestrator | 2026-04-13 03:45:52 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:56.007200 | orchestrator | 2026-04-13 03:45:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:56.008858 | orchestrator | 2026-04-13 03:45:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:56.008921 | orchestrator | 2026-04-13 03:45:56 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:45:59.063162 | orchestrator | 2026-04-13 03:45:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:45:59.064494 | orchestrator | 2026-04-13 03:45:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:45:59.064541 | orchestrator | 2026-04-13 03:45:59 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:02.123228 | orchestrator | 2026-04-13 03:46:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:02.125771 | orchestrator | 2026-04-13 03:46:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:02.125860 | orchestrator | 2026-04-13 03:46:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:05.171912 | orchestrator | 2026-04-13 03:46:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:05.172645 | orchestrator | 2026-04-13 03:46:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:05.172677 | orchestrator | 2026-04-13 03:46:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:08.212854 | orchestrator | 2026-04-13 03:46:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:08.214895 | orchestrator | 2026-04-13 03:46:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:08.214988 | orchestrator | 2026-04-13 03:46:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:11.267950 | orchestrator | 2026-04-13 03:46:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:11.271308 | orchestrator | 2026-04-13 03:46:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:11.271378 | orchestrator | 2026-04-13 03:46:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:14.317257 | orchestrator | 2026-04-13 03:46:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:14.319954 | orchestrator | 2026-04-13 03:46:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:14.320030 | orchestrator | 2026-04-13 03:46:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:17.377076 | orchestrator | 2026-04-13 03:46:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:17.378993 | orchestrator | 2026-04-13 03:46:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:17.379063 | orchestrator | 2026-04-13 03:46:17 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:20.433333 | orchestrator | 2026-04-13 03:46:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:20.435126 | orchestrator | 2026-04-13 03:46:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:20.435175 | orchestrator | 2026-04-13 03:46:20 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:23.479831 | orchestrator | 2026-04-13 03:46:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:23.482739 | orchestrator | 2026-04-13 03:46:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:23.482916 | orchestrator | 2026-04-13 03:46:23 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:26.531459 | orchestrator | 2026-04-13 03:46:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:26.533223 | orchestrator | 2026-04-13 03:46:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:26.533405 | orchestrator | 2026-04-13 03:46:26 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:29.587073 | orchestrator | 2026-04-13 03:46:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:29.588575 | orchestrator | 2026-04-13 03:46:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:29.588644 | orchestrator | 2026-04-13 03:46:29 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:32.634750 | orchestrator | 2026-04-13 03:46:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:32.636366 | orchestrator | 2026-04-13 03:46:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:32.636526 | orchestrator | 2026-04-13 03:46:32 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:35.688692 | orchestrator | 2026-04-13 03:46:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:35.690440 | orchestrator | 2026-04-13 03:46:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:35.690512 | orchestrator | 2026-04-13 03:46:35 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:38.741105 | orchestrator | 2026-04-13 03:46:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:38.743644 | orchestrator | 2026-04-13 03:46:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:38.743698 | orchestrator | 2026-04-13 03:46:38 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:41.789591 | orchestrator | 2026-04-13 03:46:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:41.790835 | orchestrator | 2026-04-13 03:46:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:41.790882 | orchestrator | 2026-04-13 03:46:41 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:44.840441 | orchestrator | 2026-04-13 03:46:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:44.843069 | orchestrator | 2026-04-13 03:46:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:44.843144 | orchestrator | 2026-04-13 03:46:44 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:47.895628 | orchestrator | 2026-04-13 03:46:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:47.897506 | orchestrator | 2026-04-13 03:46:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:47.897595 | orchestrator | 2026-04-13 03:46:47 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:50.946371 | orchestrator | 2026-04-13 03:46:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:50.948008 | orchestrator | 2026-04-13 03:46:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:50.948102 | orchestrator | 2026-04-13 03:46:50 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:53.996237 | orchestrator | 2026-04-13 03:46:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:53.999128 | orchestrator | 2026-04-13 03:46:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:53.999214 | orchestrator | 2026-04-13 03:46:54 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:46:57.048023 | orchestrator | 2026-04-13 03:46:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:46:57.049765 | orchestrator | 2026-04-13 03:46:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:46:57.049845 | orchestrator | 2026-04-13 03:46:57 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:00.091341 | orchestrator | 2026-04-13 03:47:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:00.094263 | orchestrator | 2026-04-13 03:47:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:00.094347 | orchestrator | 2026-04-13 03:47:00 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:03.145619 | orchestrator | 2026-04-13 03:47:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:03.147537 | orchestrator | 2026-04-13 03:47:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:03.147717 | orchestrator | 2026-04-13 03:47:03 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:06.191294 | orchestrator | 2026-04-13 03:47:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:06.194108 | orchestrator | 2026-04-13 03:47:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:06.194141 | orchestrator | 2026-04-13 03:47:06 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:09.243775 | orchestrator | 2026-04-13 03:47:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:09.246291 | orchestrator | 2026-04-13 03:47:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:09.246374 | orchestrator | 2026-04-13 03:47:09 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:12.294834 | orchestrator | 2026-04-13 03:47:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:12.296623 | orchestrator | 2026-04-13 03:47:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:12.296814 | orchestrator | 2026-04-13 03:47:12 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:15.343050 | orchestrator | 2026-04-13 03:47:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:15.344323 | orchestrator | 2026-04-13 03:47:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:15.344408 | orchestrator | 2026-04-13 03:47:15 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:18.392304 | orchestrator | 2026-04-13 03:47:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:18.394550 | orchestrator | 2026-04-13 03:47:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:18.394631 | orchestrator | 2026-04-13 03:47:18 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:21.448518 | orchestrator | 2026-04-13 03:47:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:21.450737 | orchestrator | 2026-04-13 03:47:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:21.451467 | orchestrator | 2026-04-13 03:47:21 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:24.494964 | orchestrator | 2026-04-13 03:47:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:24.499073 | orchestrator | 2026-04-13 03:47:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:24.499264 | orchestrator | 2026-04-13 03:47:24 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:27.549776 | orchestrator | 2026-04-13 03:47:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:27.554556 | orchestrator | 2026-04-13 03:47:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:27.554647 | orchestrator | 2026-04-13 03:47:27 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:30.600073 | orchestrator | 2026-04-13 03:47:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:30.601038 | orchestrator | 2026-04-13 03:47:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:30.601073 | orchestrator | 2026-04-13 03:47:30 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:33.650680 | orchestrator | 2026-04-13 03:47:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:33.653025 | orchestrator | 2026-04-13 03:47:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:33.653080 | orchestrator | 2026-04-13 03:47:33 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:36.707158 | orchestrator | 2026-04-13 03:47:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:36.709701 | orchestrator | 2026-04-13 03:47:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:36.709754 | orchestrator | 2026-04-13 03:47:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:39.764255 | orchestrator | 2026-04-13 03:47:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:39.766265 | orchestrator | 2026-04-13 03:47:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:39.766363 | orchestrator | 2026-04-13 03:47:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:42.810281 | orchestrator | 2026-04-13 03:47:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:42.812614 | orchestrator | 2026-04-13 03:47:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:42.812666 | orchestrator | 2026-04-13 03:47:42 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:45.862503 | orchestrator | 2026-04-13 03:47:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:45.865698 | orchestrator | 2026-04-13 03:47:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:45.865746 | orchestrator | 2026-04-13 03:47:45 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:48.917951 | orchestrator | 2026-04-13 03:47:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:48.919797 | orchestrator | 2026-04-13 03:47:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:48.920174 | orchestrator | 2026-04-13 03:47:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:51.969844 | orchestrator | 2026-04-13 03:47:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:51.971193 | orchestrator | 2026-04-13 03:47:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:51.971210 | orchestrator | 2026-04-13 03:47:51 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:55.022144 | orchestrator | 2026-04-13 03:47:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:55.024286 | orchestrator | 2026-04-13 03:47:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:55.024331 | orchestrator | 2026-04-13 03:47:55 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:47:58.072903 | orchestrator | 2026-04-13 03:47:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:47:58.074223 | orchestrator | 2026-04-13 03:47:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:47:58.074474 | orchestrator | 2026-04-13 03:47:58 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:01.117842 | orchestrator | 2026-04-13 03:48:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:01.118576 | orchestrator | 2026-04-13 03:48:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:01.118992 | orchestrator | 2026-04-13 03:48:01 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:04.167673 | orchestrator | 2026-04-13 03:48:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:04.169506 | orchestrator | 2026-04-13 03:48:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:04.169577 | orchestrator | 2026-04-13 03:48:04 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:07.222168 | orchestrator | 2026-04-13 03:48:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:07.224220 | orchestrator | 2026-04-13 03:48:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:07.224520 | orchestrator | 2026-04-13 03:48:07 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:10.271484 | orchestrator | 2026-04-13 03:48:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:10.272366 | orchestrator | 2026-04-13 03:48:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:10.272444 | orchestrator | 2026-04-13 03:48:10 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:13.315368 | orchestrator | 2026-04-13 03:48:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:13.316808 | orchestrator | 2026-04-13 03:48:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:13.316843 | orchestrator | 2026-04-13 03:48:13 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:16.366393 | orchestrator | 2026-04-13 03:48:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:16.368751 | orchestrator | 2026-04-13 03:48:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:16.368841 | orchestrator | 2026-04-13 03:48:16 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:19.415454 | orchestrator | 2026-04-13 03:48:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:19.417572 | orchestrator | 2026-04-13 03:48:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:19.417638 | orchestrator | 2026-04-13 03:48:19 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:22.473158 | orchestrator | 2026-04-13 03:48:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:22.475582 | orchestrator | 2026-04-13 03:48:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:22.475642 | orchestrator | 2026-04-13 03:48:22 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:25.522505 | orchestrator | 2026-04-13 03:48:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:25.525573 | orchestrator | 2026-04-13 03:48:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:25.525624 | orchestrator | 2026-04-13 03:48:25 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:28.582929 | orchestrator | 2026-04-13 03:48:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:28.585430 | orchestrator | 2026-04-13 03:48:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:28.585490 | orchestrator | 2026-04-13 03:48:28 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:31.633064 | orchestrator | 2026-04-13 03:48:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:31.635102 | orchestrator | 2026-04-13 03:48:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:31.635173 | orchestrator | 2026-04-13 03:48:31 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:34.681751 | orchestrator | 2026-04-13 03:48:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:34.683109 | orchestrator | 2026-04-13 03:48:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:34.683128 | orchestrator | 2026-04-13 03:48:34 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:37.729570 | orchestrator | 2026-04-13 03:48:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:37.731595 | orchestrator | 2026-04-13 03:48:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:37.731644 | orchestrator | 2026-04-13 03:48:37 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:40.786440 | orchestrator | 2026-04-13 03:48:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:40.789066 | orchestrator | 2026-04-13 03:48:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:40.789139 | orchestrator | 2026-04-13 03:48:40 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:43.846510 | orchestrator | 2026-04-13 03:48:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:43.847505 | orchestrator | 2026-04-13 03:48:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:43.847543 | orchestrator | 2026-04-13 03:48:43 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:46.898114 | orchestrator | 2026-04-13 03:48:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:46.899509 | orchestrator | 2026-04-13 03:48:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:46.899583 | orchestrator | 2026-04-13 03:48:46 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:49.952748 | orchestrator | 2026-04-13 03:48:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:49.954738 | orchestrator | 2026-04-13 03:48:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:49.954839 | orchestrator | 2026-04-13 03:48:49 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:53.017354 | orchestrator | 2026-04-13 03:48:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:53.017466 | orchestrator | 2026-04-13 03:48:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:53.017489 | orchestrator | 2026-04-13 03:48:53 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:56.066931 | orchestrator | 2026-04-13 03:48:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:56.067485 | orchestrator | 2026-04-13 03:48:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:56.067511 | orchestrator | 2026-04-13 03:48:56 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:48:59.114283 | orchestrator | 2026-04-13 03:48:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:48:59.116949 | orchestrator | 2026-04-13 03:48:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:48:59.117030 | orchestrator | 2026-04-13 03:48:59 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:02.166922 | orchestrator | 2026-04-13 03:49:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:02.169165 | orchestrator | 2026-04-13 03:49:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:02.169224 | orchestrator | 2026-04-13 03:49:02 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:05.218320 | orchestrator | 2026-04-13 03:49:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:05.220128 | orchestrator | 2026-04-13 03:49:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:05.220169 | orchestrator | 2026-04-13 03:49:05 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:08.270775 | orchestrator | 2026-04-13 03:49:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:08.272466 | orchestrator | 2026-04-13 03:49:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:08.272684 | orchestrator | 2026-04-13 03:49:08 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:11.315770 | orchestrator | 2026-04-13 03:49:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:11.317807 | orchestrator | 2026-04-13 03:49:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:11.317860 | orchestrator | 2026-04-13 03:49:11 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:14.367726 | orchestrator | 2026-04-13 03:49:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:14.368890 | orchestrator | 2026-04-13 03:49:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:14.369156 | orchestrator | 2026-04-13 03:49:14 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:17.417714 | orchestrator | 2026-04-13 03:49:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:17.419991 | orchestrator | 2026-04-13 03:49:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:17.420026 | orchestrator | 2026-04-13 03:49:17 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:20.467699 | orchestrator | 2026-04-13 03:49:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:20.469381 | orchestrator | 2026-04-13 03:49:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:20.469421 | orchestrator | 2026-04-13 03:49:20 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:23.519143 | orchestrator | 2026-04-13 03:49:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:23.522490 | orchestrator | 2026-04-13 03:49:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:23.522639 | orchestrator | 2026-04-13 03:49:23 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:26.566157 | orchestrator | 2026-04-13 03:49:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:26.568260 | orchestrator | 2026-04-13 03:49:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:26.568328 | orchestrator | 2026-04-13 03:49:26 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:29.620456 | orchestrator | 2026-04-13 03:49:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:29.622165 | orchestrator | 2026-04-13 03:49:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:29.622253 | orchestrator | 2026-04-13 03:49:29 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:32.667440 | orchestrator | 2026-04-13 03:49:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:32.669461 | orchestrator | 2026-04-13 03:49:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:32.669513 | orchestrator | 2026-04-13 03:49:32 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:35.719826 | orchestrator | 2026-04-13 03:49:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:35.722430 | orchestrator | 2026-04-13 03:49:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:35.722509 | orchestrator | 2026-04-13 03:49:35 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:38.769895 | orchestrator | 2026-04-13 03:49:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:38.773021 | orchestrator | 2026-04-13 03:49:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:38.773122 | orchestrator | 2026-04-13 03:49:38 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:41.817835 | orchestrator | 2026-04-13 03:49:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:41.819865 | orchestrator | 2026-04-13 03:49:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:41.819947 | orchestrator | 2026-04-13 03:49:41 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:44.861213 | orchestrator | 2026-04-13 03:49:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:44.863289 | orchestrator | 2026-04-13 03:49:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:44.863359 | orchestrator | 2026-04-13 03:49:44 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:47.913520 | orchestrator | 2026-04-13 03:49:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:47.916245 | orchestrator | 2026-04-13 03:49:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:47.916293 | orchestrator | 2026-04-13 03:49:47 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:50.965403 | orchestrator | 2026-04-13 03:49:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:50.967695 | orchestrator | 2026-04-13 03:49:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:50.967809 | orchestrator | 2026-04-13 03:49:50 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:54.022324 | orchestrator | 2026-04-13 03:49:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:54.023959 | orchestrator | 2026-04-13 03:49:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:54.024119 | orchestrator | 2026-04-13 03:49:54 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:49:57.075446 | orchestrator | 2026-04-13 03:49:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:49:57.078202 | orchestrator | 2026-04-13 03:49:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:49:57.078300 | orchestrator | 2026-04-13 03:49:57 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:00.123534 | orchestrator | 2026-04-13 03:50:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:00.125559 | orchestrator | 2026-04-13 03:50:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:00.125656 | orchestrator | 2026-04-13 03:50:00 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:03.174146 | orchestrator | 2026-04-13 03:50:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:03.176239 | orchestrator | 2026-04-13 03:50:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:03.176291 | orchestrator | 2026-04-13 03:50:03 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:06.221801 | orchestrator | 2026-04-13 03:50:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:06.224677 | orchestrator | 2026-04-13 03:50:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:06.224739 | orchestrator | 2026-04-13 03:50:06 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:09.277419 | orchestrator | 2026-04-13 03:50:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:09.279318 | orchestrator | 2026-04-13 03:50:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:09.279366 | orchestrator | 2026-04-13 03:50:09 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:12.330501 | orchestrator | 2026-04-13 03:50:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:12.331707 | orchestrator | 2026-04-13 03:50:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:12.331924 | orchestrator | 2026-04-13 03:50:12 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:15.389634 | orchestrator | 2026-04-13 03:50:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:15.395325 | orchestrator | 2026-04-13 03:50:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:15.395386 | orchestrator | 2026-04-13 03:50:15 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:18.442326 | orchestrator | 2026-04-13 03:50:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:18.444802 | orchestrator | 2026-04-13 03:50:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:18.446289 | orchestrator | 2026-04-13 03:50:18 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:21.499829 | orchestrator | 2026-04-13 03:50:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:21.500503 | orchestrator | 2026-04-13 03:50:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:21.500538 | orchestrator | 2026-04-13 03:50:21 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:24.556532 | orchestrator | 2026-04-13 03:50:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:24.556630 | orchestrator | 2026-04-13 03:50:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:24.556642 | orchestrator | 2026-04-13 03:50:24 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:27.602145 | orchestrator | 2026-04-13 03:50:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:27.603424 | orchestrator | 2026-04-13 03:50:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:27.603560 | orchestrator | 2026-04-13 03:50:27 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:30.647146 | orchestrator | 2026-04-13 03:50:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:30.649177 | orchestrator | 2026-04-13 03:50:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:30.649263 | orchestrator | 2026-04-13 03:50:30 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:33.707328 | orchestrator | 2026-04-13 03:50:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:33.709685 | orchestrator | 2026-04-13 03:50:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:33.709769 | orchestrator | 2026-04-13 03:50:33 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:36.749994 | orchestrator | 2026-04-13 03:50:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:36.751131 | orchestrator | 2026-04-13 03:50:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:36.751177 | orchestrator | 2026-04-13 03:50:36 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:39.802428 | orchestrator | 2026-04-13 03:50:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:39.804600 | orchestrator | 2026-04-13 03:50:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:39.804745 | orchestrator | 2026-04-13 03:50:39 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:42.852882 | orchestrator | 2026-04-13 03:50:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:42.855658 | orchestrator | 2026-04-13 03:50:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:42.855732 | orchestrator | 2026-04-13 03:50:42 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:45.899555 | orchestrator | 2026-04-13 03:50:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:45.900746 | orchestrator | 2026-04-13 03:50:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:45.900991 | orchestrator | 2026-04-13 03:50:45 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:48.954769 | orchestrator | 2026-04-13 03:50:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:48.956641 | orchestrator | 2026-04-13 03:50:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:48.956898 | orchestrator | 2026-04-13 03:50:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:50:52.003793 | orchestrator | 2026-04-13 03:50:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:50:52.004601 | orchestrator | 2026-04-13 03:50:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:50:52.004700 | orchestrator | 2026-04-13 03:50:52 | INFO  | Wait 1 second(s)
until the next check 2026-04-13 03:50:55.064992 | orchestrator | 2026-04-13 03:50:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:50:55.067096 | orchestrator | 2026-04-13 03:50:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:50:55.067631 | orchestrator | 2026-04-13 03:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:50:58.119069 | orchestrator | 2026-04-13 03:50:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:50:58.122390 | orchestrator | 2026-04-13 03:50:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:50:58.122463 | orchestrator | 2026-04-13 03:50:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:01.177131 | orchestrator | 2026-04-13 03:51:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:01.179312 | orchestrator | 2026-04-13 03:51:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:01.179383 | orchestrator | 2026-04-13 03:51:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:04.224074 | orchestrator | 2026-04-13 03:51:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:04.225539 | orchestrator | 2026-04-13 03:51:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:04.225626 | orchestrator | 2026-04-13 03:51:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:07.266630 | orchestrator | 2026-04-13 03:51:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:07.267465 | orchestrator | 2026-04-13 03:51:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:07.267529 | orchestrator | 2026-04-13 03:51:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:10.320290 | orchestrator | 2026-04-13 
03:51:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:10.322467 | orchestrator | 2026-04-13 03:51:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:10.322566 | orchestrator | 2026-04-13 03:51:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:13.377639 | orchestrator | 2026-04-13 03:51:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:13.379114 | orchestrator | 2026-04-13 03:51:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:13.379223 | orchestrator | 2026-04-13 03:51:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:16.428068 | orchestrator | 2026-04-13 03:51:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:16.429381 | orchestrator | 2026-04-13 03:51:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:16.429446 | orchestrator | 2026-04-13 03:51:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:19.477558 | orchestrator | 2026-04-13 03:51:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:19.478836 | orchestrator | 2026-04-13 03:51:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:19.478892 | orchestrator | 2026-04-13 03:51:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:22.524133 | orchestrator | 2026-04-13 03:51:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:22.525008 | orchestrator | 2026-04-13 03:51:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:22.525040 | orchestrator | 2026-04-13 03:51:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:25.570546 | orchestrator | 2026-04-13 03:51:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:51:25.572716 | orchestrator | 2026-04-13 03:51:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:25.572766 | orchestrator | 2026-04-13 03:51:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:28.623684 | orchestrator | 2026-04-13 03:51:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:28.624778 | orchestrator | 2026-04-13 03:51:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:28.624948 | orchestrator | 2026-04-13 03:51:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:31.676232 | orchestrator | 2026-04-13 03:51:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:31.677170 | orchestrator | 2026-04-13 03:51:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:31.677215 | orchestrator | 2026-04-13 03:51:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:34.730570 | orchestrator | 2026-04-13 03:51:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:34.731953 | orchestrator | 2026-04-13 03:51:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:34.732084 | orchestrator | 2026-04-13 03:51:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:37.780794 | orchestrator | 2026-04-13 03:51:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:37.783117 | orchestrator | 2026-04-13 03:51:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:37.783202 | orchestrator | 2026-04-13 03:51:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:40.832985 | orchestrator | 2026-04-13 03:51:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:40.833926 | orchestrator | 2026-04-13 03:51:40 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:40.833977 | orchestrator | 2026-04-13 03:51:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:43.877383 | orchestrator | 2026-04-13 03:51:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:43.879500 | orchestrator | 2026-04-13 03:51:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:43.879572 | orchestrator | 2026-04-13 03:51:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:46.924882 | orchestrator | 2026-04-13 03:51:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:46.925240 | orchestrator | 2026-04-13 03:51:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:46.925271 | orchestrator | 2026-04-13 03:51:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:49.979996 | orchestrator | 2026-04-13 03:51:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:49.981992 | orchestrator | 2026-04-13 03:51:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:49.982143 | orchestrator | 2026-04-13 03:51:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:53.027978 | orchestrator | 2026-04-13 03:51:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:53.030174 | orchestrator | 2026-04-13 03:51:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:53.030248 | orchestrator | 2026-04-13 03:51:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:56.084206 | orchestrator | 2026-04-13 03:51:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:56.085247 | orchestrator | 2026-04-13 03:51:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:51:56.085290 | orchestrator | 2026-04-13 03:51:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:51:59.139549 | orchestrator | 2026-04-13 03:51:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:51:59.141084 | orchestrator | 2026-04-13 03:51:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:51:59.141144 | orchestrator | 2026-04-13 03:51:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:02.188891 | orchestrator | 2026-04-13 03:52:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:02.192019 | orchestrator | 2026-04-13 03:52:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:02.192100 | orchestrator | 2026-04-13 03:52:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:05.242455 | orchestrator | 2026-04-13 03:52:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:05.244452 | orchestrator | 2026-04-13 03:52:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:05.244602 | orchestrator | 2026-04-13 03:52:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:08.289721 | orchestrator | 2026-04-13 03:52:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:08.290513 | orchestrator | 2026-04-13 03:52:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:08.290568 | orchestrator | 2026-04-13 03:52:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:11.335663 | orchestrator | 2026-04-13 03:52:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:11.337705 | orchestrator | 2026-04-13 03:52:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:11.337734 | orchestrator | 2026-04-13 03:52:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:52:14.383649 | orchestrator | 2026-04-13 03:52:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:14.384601 | orchestrator | 2026-04-13 03:52:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:14.384737 | orchestrator | 2026-04-13 03:52:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:17.431991 | orchestrator | 2026-04-13 03:52:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:17.433848 | orchestrator | 2026-04-13 03:52:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:17.433909 | orchestrator | 2026-04-13 03:52:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:20.484944 | orchestrator | 2026-04-13 03:52:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:20.486640 | orchestrator | 2026-04-13 03:52:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:20.486692 | orchestrator | 2026-04-13 03:52:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:23.536804 | orchestrator | 2026-04-13 03:52:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:23.539519 | orchestrator | 2026-04-13 03:52:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:23.539606 | orchestrator | 2026-04-13 03:52:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:26.596052 | orchestrator | 2026-04-13 03:52:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:26.597675 | orchestrator | 2026-04-13 03:52:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:26.597719 | orchestrator | 2026-04-13 03:52:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:29.651437 | orchestrator | 2026-04-13 
03:52:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:29.654442 | orchestrator | 2026-04-13 03:52:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:29.654509 | orchestrator | 2026-04-13 03:52:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:32.702658 | orchestrator | 2026-04-13 03:52:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:32.704356 | orchestrator | 2026-04-13 03:52:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:32.704394 | orchestrator | 2026-04-13 03:52:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:35.756417 | orchestrator | 2026-04-13 03:52:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:35.758003 | orchestrator | 2026-04-13 03:52:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:35.758109 | orchestrator | 2026-04-13 03:52:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:38.807888 | orchestrator | 2026-04-13 03:52:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:38.809495 | orchestrator | 2026-04-13 03:52:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:38.809564 | orchestrator | 2026-04-13 03:52:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:41.854804 | orchestrator | 2026-04-13 03:52:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:41.856054 | orchestrator | 2026-04-13 03:52:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:41.857462 | orchestrator | 2026-04-13 03:52:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:44.906319 | orchestrator | 2026-04-13 03:52:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:52:44.909024 | orchestrator | 2026-04-13 03:52:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:44.909070 | orchestrator | 2026-04-13 03:52:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:47.960776 | orchestrator | 2026-04-13 03:52:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:47.963263 | orchestrator | 2026-04-13 03:52:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:47.963352 | orchestrator | 2026-04-13 03:52:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:51.015858 | orchestrator | 2026-04-13 03:52:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:51.016387 | orchestrator | 2026-04-13 03:52:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:51.016528 | orchestrator | 2026-04-13 03:52:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:54.062467 | orchestrator | 2026-04-13 03:52:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:54.063750 | orchestrator | 2026-04-13 03:52:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:54.063794 | orchestrator | 2026-04-13 03:52:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:52:57.118598 | orchestrator | 2026-04-13 03:52:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:52:57.120754 | orchestrator | 2026-04-13 03:52:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:52:57.120882 | orchestrator | 2026-04-13 03:52:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:00.164971 | orchestrator | 2026-04-13 03:53:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:00.166514 | orchestrator | 2026-04-13 03:53:00 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:00.166581 | orchestrator | 2026-04-13 03:53:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:03.212596 | orchestrator | 2026-04-13 03:53:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:03.213246 | orchestrator | 2026-04-13 03:53:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:03.213289 | orchestrator | 2026-04-13 03:53:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:06.258169 | orchestrator | 2026-04-13 03:53:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:06.259260 | orchestrator | 2026-04-13 03:53:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:06.259281 | orchestrator | 2026-04-13 03:53:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:09.305453 | orchestrator | 2026-04-13 03:53:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:09.307021 | orchestrator | 2026-04-13 03:53:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:09.307083 | orchestrator | 2026-04-13 03:53:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:12.364855 | orchestrator | 2026-04-13 03:53:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:12.367138 | orchestrator | 2026-04-13 03:53:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:12.367242 | orchestrator | 2026-04-13 03:53:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:15.423908 | orchestrator | 2026-04-13 03:53:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:15.425518 | orchestrator | 2026-04-13 03:53:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:53:15.425575 | orchestrator | 2026-04-13 03:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:18.475642 | orchestrator | 2026-04-13 03:53:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:18.478949 | orchestrator | 2026-04-13 03:53:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:18.479067 | orchestrator | 2026-04-13 03:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:21.530388 | orchestrator | 2026-04-13 03:53:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:21.535201 | orchestrator | 2026-04-13 03:53:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:21.535287 | orchestrator | 2026-04-13 03:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:24.586269 | orchestrator | 2026-04-13 03:53:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:24.587613 | orchestrator | 2026-04-13 03:53:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:24.587635 | orchestrator | 2026-04-13 03:53:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:27.636537 | orchestrator | 2026-04-13 03:53:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:27.638368 | orchestrator | 2026-04-13 03:53:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:27.638497 | orchestrator | 2026-04-13 03:53:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:30.693056 | orchestrator | 2026-04-13 03:53:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:30.694282 | orchestrator | 2026-04-13 03:53:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:30.694347 | orchestrator | 2026-04-13 03:53:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 03:53:33.742247 | orchestrator | 2026-04-13 03:53:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:33.744346 | orchestrator | 2026-04-13 03:53:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:33.744393 | orchestrator | 2026-04-13 03:53:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:36.788731 | orchestrator | 2026-04-13 03:53:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:36.790674 | orchestrator | 2026-04-13 03:53:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:36.790725 | orchestrator | 2026-04-13 03:53:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:39.840708 | orchestrator | 2026-04-13 03:53:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:39.843418 | orchestrator | 2026-04-13 03:53:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:39.843504 | orchestrator | 2026-04-13 03:53:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:42.892342 | orchestrator | 2026-04-13 03:53:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:42.893975 | orchestrator | 2026-04-13 03:53:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:42.894103 | orchestrator | 2026-04-13 03:53:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:45.946345 | orchestrator | 2026-04-13 03:53:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:45.947773 | orchestrator | 2026-04-13 03:53:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:45.947836 | orchestrator | 2026-04-13 03:53:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:49.003914 | orchestrator | 2026-04-13 
03:53:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:49.005297 | orchestrator | 2026-04-13 03:53:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:49.005414 | orchestrator | 2026-04-13 03:53:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:52.052774 | orchestrator | 2026-04-13 03:53:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:52.054837 | orchestrator | 2026-04-13 03:53:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:52.054933 | orchestrator | 2026-04-13 03:53:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:55.106699 | orchestrator | 2026-04-13 03:53:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:55.108019 | orchestrator | 2026-04-13 03:53:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:55.108061 | orchestrator | 2026-04-13 03:53:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:53:58.162122 | orchestrator | 2026-04-13 03:53:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:53:58.164213 | orchestrator | 2026-04-13 03:53:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:53:58.164843 | orchestrator | 2026-04-13 03:53:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:01.213582 | orchestrator | 2026-04-13 03:54:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:01.215835 | orchestrator | 2026-04-13 03:54:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:01.215908 | orchestrator | 2026-04-13 03:54:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:04.270271 | orchestrator | 2026-04-13 03:54:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 03:54:04.271894 | orchestrator | 2026-04-13 03:54:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:04.271973 | orchestrator | 2026-04-13 03:54:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:07.326216 | orchestrator | 2026-04-13 03:54:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:07.328490 | orchestrator | 2026-04-13 03:54:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:07.328562 | orchestrator | 2026-04-13 03:54:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:10.382223 | orchestrator | 2026-04-13 03:54:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:10.383905 | orchestrator | 2026-04-13 03:54:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:10.383958 | orchestrator | 2026-04-13 03:54:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:13.435853 | orchestrator | 2026-04-13 03:54:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:13.438274 | orchestrator | 2026-04-13 03:54:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:13.438825 | orchestrator | 2026-04-13 03:54:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:16.487726 | orchestrator | 2026-04-13 03:54:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:16.489780 | orchestrator | 2026-04-13 03:54:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:16.489830 | orchestrator | 2026-04-13 03:54:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:19.534209 | orchestrator | 2026-04-13 03:54:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:19.534732 | orchestrator | 2026-04-13 03:54:19 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:19.534779 | orchestrator | 2026-04-13 03:54:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:22.577832 | orchestrator | 2026-04-13 03:54:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:22.579414 | orchestrator | 2026-04-13 03:54:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:22.579466 | orchestrator | 2026-04-13 03:54:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:25.628835 | orchestrator | 2026-04-13 03:54:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:25.630906 | orchestrator | 2026-04-13 03:54:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:25.631059 | orchestrator | 2026-04-13 03:54:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:28.672738 | orchestrator | 2026-04-13 03:54:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:28.674242 | orchestrator | 2026-04-13 03:54:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:28.674308 | orchestrator | 2026-04-13 03:54:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:31.723495 | orchestrator | 2026-04-13 03:54:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:31.725729 | orchestrator | 2026-04-13 03:54:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 03:54:31.725789 | orchestrator | 2026-04-13 03:54:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 03:54:34.780657 | orchestrator | 2026-04-13 03:54:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 03:54:34.782930 | orchestrator | 2026-04-13 03:54:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
03:54:34.782971 | orchestrator | 2026-04-13 03:54:34 | INFO  | Wait 1 second(s) until the next check
2026-04-13 03:54:37.834888 | orchestrator | 2026-04-13 03:54:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 03:54:37.837459 | orchestrator | 2026-04-13 03:54:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 03:54:37.837558 | orchestrator | 2026-04-13 03:54:37 | INFO  | Wait 1 second(s) until the next check
[... identical polling records for tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5, repeated every ~3 seconds from 03:54:40 through 04:02:04, elided ...]
2026-04-13 04:02:07.138566 | orchestrator | 2026-04-13 04:02:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:02:07.142747 | orchestrator | 2026-04-13 04:02:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:02:07.142819 | orchestrator | 2026-04-13 04:02:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:02:10.184300 | orchestrator | 2026-04-13 04:02:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:10.186001 | orchestrator | 2026-04-13 04:02:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:10.186087 | orchestrator | 2026-04-13 04:02:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:13.233821 | orchestrator | 2026-04-13 04:02:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:13.236543 | orchestrator | 2026-04-13 04:02:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:13.236700 | orchestrator | 2026-04-13 04:02:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:16.275309 | orchestrator | 2026-04-13 04:02:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:16.277062 | orchestrator | 2026-04-13 04:02:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:16.277142 | orchestrator | 2026-04-13 04:02:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:19.320732 | orchestrator | 2026-04-13 04:02:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:19.323718 | orchestrator | 2026-04-13 04:02:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:19.323822 | orchestrator | 2026-04-13 04:02:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:22.369534 | orchestrator | 2026-04-13 04:02:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:22.371250 | orchestrator | 2026-04-13 04:02:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:22.371297 | orchestrator | 2026-04-13 04:02:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:25.415560 | orchestrator | 2026-04-13 
04:02:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:25.418743 | orchestrator | 2026-04-13 04:02:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:25.418809 | orchestrator | 2026-04-13 04:02:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:28.460948 | orchestrator | 2026-04-13 04:02:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:28.461999 | orchestrator | 2026-04-13 04:02:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:28.462122 | orchestrator | 2026-04-13 04:02:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:31.504472 | orchestrator | 2026-04-13 04:02:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:31.506824 | orchestrator | 2026-04-13 04:02:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:31.506891 | orchestrator | 2026-04-13 04:02:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:34.552999 | orchestrator | 2026-04-13 04:02:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:34.554170 | orchestrator | 2026-04-13 04:02:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:34.554220 | orchestrator | 2026-04-13 04:02:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:37.608292 | orchestrator | 2026-04-13 04:02:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:37.610235 | orchestrator | 2026-04-13 04:02:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:37.610294 | orchestrator | 2026-04-13 04:02:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:40.653913 | orchestrator | 2026-04-13 04:02:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:02:40.655841 | orchestrator | 2026-04-13 04:02:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:40.655907 | orchestrator | 2026-04-13 04:02:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:43.692670 | orchestrator | 2026-04-13 04:02:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:43.692978 | orchestrator | 2026-04-13 04:02:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:43.693127 | orchestrator | 2026-04-13 04:02:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:46.738634 | orchestrator | 2026-04-13 04:02:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:46.739790 | orchestrator | 2026-04-13 04:02:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:46.739831 | orchestrator | 2026-04-13 04:02:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:49.787902 | orchestrator | 2026-04-13 04:02:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:49.789391 | orchestrator | 2026-04-13 04:02:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:49.789483 | orchestrator | 2026-04-13 04:02:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:52.837711 | orchestrator | 2026-04-13 04:02:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:52.839276 | orchestrator | 2026-04-13 04:02:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:52.839339 | orchestrator | 2026-04-13 04:02:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:55.879398 | orchestrator | 2026-04-13 04:02:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:55.880774 | orchestrator | 2026-04-13 04:02:55 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:55.880817 | orchestrator | 2026-04-13 04:02:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:02:58.922487 | orchestrator | 2026-04-13 04:02:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:02:58.924379 | orchestrator | 2026-04-13 04:02:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:02:58.924406 | orchestrator | 2026-04-13 04:02:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:01.969863 | orchestrator | 2026-04-13 04:03:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:01.972952 | orchestrator | 2026-04-13 04:03:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:01.973457 | orchestrator | 2026-04-13 04:03:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:05.019981 | orchestrator | 2026-04-13 04:03:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:05.021537 | orchestrator | 2026-04-13 04:03:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:05.021614 | orchestrator | 2026-04-13 04:03:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:08.076893 | orchestrator | 2026-04-13 04:03:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:08.079330 | orchestrator | 2026-04-13 04:03:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:08.079530 | orchestrator | 2026-04-13 04:03:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:11.119689 | orchestrator | 2026-04-13 04:03:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:11.122125 | orchestrator | 2026-04-13 04:03:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:03:11.122479 | orchestrator | 2026-04-13 04:03:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:14.171552 | orchestrator | 2026-04-13 04:03:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:14.174743 | orchestrator | 2026-04-13 04:03:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:14.174852 | orchestrator | 2026-04-13 04:03:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:17.219199 | orchestrator | 2026-04-13 04:03:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:17.220305 | orchestrator | 2026-04-13 04:03:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:17.223063 | orchestrator | 2026-04-13 04:03:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:20.266914 | orchestrator | 2026-04-13 04:03:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:20.269367 | orchestrator | 2026-04-13 04:03:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:20.269436 | orchestrator | 2026-04-13 04:03:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:23.317357 | orchestrator | 2026-04-13 04:03:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:23.318779 | orchestrator | 2026-04-13 04:03:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:23.318834 | orchestrator | 2026-04-13 04:03:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:26.354859 | orchestrator | 2026-04-13 04:03:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:26.356863 | orchestrator | 2026-04-13 04:03:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:26.356935 | orchestrator | 2026-04-13 04:03:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:03:29.405306 | orchestrator | 2026-04-13 04:03:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:29.407916 | orchestrator | 2026-04-13 04:03:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:29.408014 | orchestrator | 2026-04-13 04:03:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:32.449098 | orchestrator | 2026-04-13 04:03:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:32.450717 | orchestrator | 2026-04-13 04:03:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:32.450849 | orchestrator | 2026-04-13 04:03:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:35.491210 | orchestrator | 2026-04-13 04:03:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:35.493603 | orchestrator | 2026-04-13 04:03:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:35.493672 | orchestrator | 2026-04-13 04:03:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:38.542272 | orchestrator | 2026-04-13 04:03:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:38.544108 | orchestrator | 2026-04-13 04:03:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:38.544129 | orchestrator | 2026-04-13 04:03:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:41.578180 | orchestrator | 2026-04-13 04:03:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:41.580272 | orchestrator | 2026-04-13 04:03:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:41.580335 | orchestrator | 2026-04-13 04:03:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:44.621909 | orchestrator | 2026-04-13 
04:03:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:44.625405 | orchestrator | 2026-04-13 04:03:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:44.625462 | orchestrator | 2026-04-13 04:03:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:47.675897 | orchestrator | 2026-04-13 04:03:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:47.677186 | orchestrator | 2026-04-13 04:03:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:47.677225 | orchestrator | 2026-04-13 04:03:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:50.729855 | orchestrator | 2026-04-13 04:03:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:50.733251 | orchestrator | 2026-04-13 04:03:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:50.733656 | orchestrator | 2026-04-13 04:03:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:53.794406 | orchestrator | 2026-04-13 04:03:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:53.796010 | orchestrator | 2026-04-13 04:03:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:53.796324 | orchestrator | 2026-04-13 04:03:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:56.842396 | orchestrator | 2026-04-13 04:03:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:03:56.844485 | orchestrator | 2026-04-13 04:03:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:56.844540 | orchestrator | 2026-04-13 04:03:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:03:59.882192 | orchestrator | 2026-04-13 04:03:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:03:59.883016 | orchestrator | 2026-04-13 04:03:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:03:59.883055 | orchestrator | 2026-04-13 04:03:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:02.928587 | orchestrator | 2026-04-13 04:04:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:02.931525 | orchestrator | 2026-04-13 04:04:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:02.931596 | orchestrator | 2026-04-13 04:04:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:05.981508 | orchestrator | 2026-04-13 04:04:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:05.984128 | orchestrator | 2026-04-13 04:04:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:05.984195 | orchestrator | 2026-04-13 04:04:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:09.034661 | orchestrator | 2026-04-13 04:04:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:09.037435 | orchestrator | 2026-04-13 04:04:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:09.037530 | orchestrator | 2026-04-13 04:04:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:12.077257 | orchestrator | 2026-04-13 04:04:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:12.077981 | orchestrator | 2026-04-13 04:04:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:12.078297 | orchestrator | 2026-04-13 04:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:15.125636 | orchestrator | 2026-04-13 04:04:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:15.127220 | orchestrator | 2026-04-13 04:04:15 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:15.127281 | orchestrator | 2026-04-13 04:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:18.174754 | orchestrator | 2026-04-13 04:04:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:18.177378 | orchestrator | 2026-04-13 04:04:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:18.177466 | orchestrator | 2026-04-13 04:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:21.228175 | orchestrator | 2026-04-13 04:04:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:21.229595 | orchestrator | 2026-04-13 04:04:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:21.229644 | orchestrator | 2026-04-13 04:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:24.279560 | orchestrator | 2026-04-13 04:04:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:24.281971 | orchestrator | 2026-04-13 04:04:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:24.282077 | orchestrator | 2026-04-13 04:04:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:27.325568 | orchestrator | 2026-04-13 04:04:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:27.327598 | orchestrator | 2026-04-13 04:04:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:27.327645 | orchestrator | 2026-04-13 04:04:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:30.380556 | orchestrator | 2026-04-13 04:04:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:30.381572 | orchestrator | 2026-04-13 04:04:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:04:30.381613 | orchestrator | 2026-04-13 04:04:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:33.428670 | orchestrator | 2026-04-13 04:04:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:33.430815 | orchestrator | 2026-04-13 04:04:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:33.430881 | orchestrator | 2026-04-13 04:04:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:36.479610 | orchestrator | 2026-04-13 04:04:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:36.481050 | orchestrator | 2026-04-13 04:04:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:36.481135 | orchestrator | 2026-04-13 04:04:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:39.532538 | orchestrator | 2026-04-13 04:04:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:39.537482 | orchestrator | 2026-04-13 04:04:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:39.537535 | orchestrator | 2026-04-13 04:04:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:42.584659 | orchestrator | 2026-04-13 04:04:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:42.586509 | orchestrator | 2026-04-13 04:04:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:42.586574 | orchestrator | 2026-04-13 04:04:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:45.630470 | orchestrator | 2026-04-13 04:04:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:45.631860 | orchestrator | 2026-04-13 04:04:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:45.631948 | orchestrator | 2026-04-13 04:04:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:04:48.679858 | orchestrator | 2026-04-13 04:04:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:48.681764 | orchestrator | 2026-04-13 04:04:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:48.681867 | orchestrator | 2026-04-13 04:04:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:51.733753 | orchestrator | 2026-04-13 04:04:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:51.735391 | orchestrator | 2026-04-13 04:04:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:51.735443 | orchestrator | 2026-04-13 04:04:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:54.790263 | orchestrator | 2026-04-13 04:04:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:54.792077 | orchestrator | 2026-04-13 04:04:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:54.792172 | orchestrator | 2026-04-13 04:04:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:04:57.845635 | orchestrator | 2026-04-13 04:04:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:04:57.847138 | orchestrator | 2026-04-13 04:04:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:04:57.847181 | orchestrator | 2026-04-13 04:04:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:00.897937 | orchestrator | 2026-04-13 04:05:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:00.898962 | orchestrator | 2026-04-13 04:05:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:00.898992 | orchestrator | 2026-04-13 04:05:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:03.950971 | orchestrator | 2026-04-13 
04:05:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:03.953250 | orchestrator | 2026-04-13 04:05:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:03.953306 | orchestrator | 2026-04-13 04:05:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:07.003402 | orchestrator | 2026-04-13 04:05:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:07.005485 | orchestrator | 2026-04-13 04:05:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:07.005660 | orchestrator | 2026-04-13 04:05:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:10.049197 | orchestrator | 2026-04-13 04:05:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:10.051054 | orchestrator | 2026-04-13 04:05:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:10.051086 | orchestrator | 2026-04-13 04:05:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:13.090761 | orchestrator | 2026-04-13 04:05:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:13.090848 | orchestrator | 2026-04-13 04:05:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:13.090858 | orchestrator | 2026-04-13 04:05:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:16.127607 | orchestrator | 2026-04-13 04:05:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:16.129285 | orchestrator | 2026-04-13 04:05:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:16.129340 | orchestrator | 2026-04-13 04:05:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:19.180682 | orchestrator | 2026-04-13 04:05:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:05:19.183203 | orchestrator | 2026-04-13 04:05:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:19.183251 | orchestrator | 2026-04-13 04:05:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:22.236686 | orchestrator | 2026-04-13 04:05:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:22.239692 | orchestrator | 2026-04-13 04:05:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:22.239750 | orchestrator | 2026-04-13 04:05:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:25.292460 | orchestrator | 2026-04-13 04:05:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:25.294092 | orchestrator | 2026-04-13 04:05:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:25.294154 | orchestrator | 2026-04-13 04:05:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:28.339435 | orchestrator | 2026-04-13 04:05:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:28.341075 | orchestrator | 2026-04-13 04:05:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:28.341113 | orchestrator | 2026-04-13 04:05:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:31.390295 | orchestrator | 2026-04-13 04:05:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:31.391611 | orchestrator | 2026-04-13 04:05:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:31.391657 | orchestrator | 2026-04-13 04:05:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:34.442424 | orchestrator | 2026-04-13 04:05:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:34.444530 | orchestrator | 2026-04-13 04:05:34 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:34.444617 | orchestrator | 2026-04-13 04:05:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:37.500904 | orchestrator | 2026-04-13 04:05:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:37.503031 | orchestrator | 2026-04-13 04:05:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:37.503096 | orchestrator | 2026-04-13 04:05:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:40.546580 | orchestrator | 2026-04-13 04:05:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:40.548038 | orchestrator | 2026-04-13 04:05:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:40.548409 | orchestrator | 2026-04-13 04:05:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:43.597275 | orchestrator | 2026-04-13 04:05:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:43.598788 | orchestrator | 2026-04-13 04:05:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:43.598831 | orchestrator | 2026-04-13 04:05:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:46.644970 | orchestrator | 2026-04-13 04:05:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:46.646145 | orchestrator | 2026-04-13 04:05:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:05:46.646222 | orchestrator | 2026-04-13 04:05:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:05:49.699421 | orchestrator | 2026-04-13 04:05:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:05:49.700258 | orchestrator | 2026-04-13 04:05:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:05:49.700291 | orchestrator | 2026-04-13 04:05:49 | INFO  | Wait 1 second(s) until the next check
2026-04-13 04:05:52.752123 | orchestrator | 2026-04-13 04:05:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:05:52.754799 | orchestrator | 2026-04-13 04:05:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:05:52.754894 | orchestrator | 2026-04-13 04:05:52 | INFO  | Wait 1 second(s) until the next check
2026-04-13 04:10:48.661055 | orchestrator | 2026-04-13 04:10:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:10:48.663747 | orchestrator | 2026-04-13 04:10:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:10:48.663830 | orchestrator | 2026-04-13 04:10:48 | INFO  | Wait 1 second(s) until the next check
2026-04-13 04:10:51.716037 | orchestrator | 2026-04-13 04:10:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:10:51.717404 | orchestrator | 2026-04-13 04:10:51 | INFO
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:10:51.717957 | orchestrator | 2026-04-13 04:10:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:10:54.764183 | orchestrator | 2026-04-13 04:10:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:10:54.766669 | orchestrator | 2026-04-13 04:10:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:10:54.766737 | orchestrator | 2026-04-13 04:10:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:10:57.801322 | orchestrator | 2026-04-13 04:10:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:10:57.805163 | orchestrator | 2026-04-13 04:10:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:10:57.805200 | orchestrator | 2026-04-13 04:10:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:00.849758 | orchestrator | 2026-04-13 04:11:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:00.850809 | orchestrator | 2026-04-13 04:11:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:00.850828 | orchestrator | 2026-04-13 04:11:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:03.898412 | orchestrator | 2026-04-13 04:11:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:03.901260 | orchestrator | 2026-04-13 04:11:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:03.902727 | orchestrator | 2026-04-13 04:11:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:06.955039 | orchestrator | 2026-04-13 04:11:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:06.959220 | orchestrator | 2026-04-13 04:11:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:11:06.959330 | orchestrator | 2026-04-13 04:11:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:10.015624 | orchestrator | 2026-04-13 04:11:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:10.017362 | orchestrator | 2026-04-13 04:11:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:10.017389 | orchestrator | 2026-04-13 04:11:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:13.069709 | orchestrator | 2026-04-13 04:11:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:13.071794 | orchestrator | 2026-04-13 04:11:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:13.071866 | orchestrator | 2026-04-13 04:11:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:16.119654 | orchestrator | 2026-04-13 04:11:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:16.121298 | orchestrator | 2026-04-13 04:11:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:16.121339 | orchestrator | 2026-04-13 04:11:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:19.170206 | orchestrator | 2026-04-13 04:11:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:19.172356 | orchestrator | 2026-04-13 04:11:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:19.172724 | orchestrator | 2026-04-13 04:11:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:22.217940 | orchestrator | 2026-04-13 04:11:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:22.221028 | orchestrator | 2026-04-13 04:11:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:22.221083 | orchestrator | 2026-04-13 04:11:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:11:25.269568 | orchestrator | 2026-04-13 04:11:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:25.270083 | orchestrator | 2026-04-13 04:11:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:25.270114 | orchestrator | 2026-04-13 04:11:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:28.318312 | orchestrator | 2026-04-13 04:11:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:28.319155 | orchestrator | 2026-04-13 04:11:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:28.319189 | orchestrator | 2026-04-13 04:11:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:31.371662 | orchestrator | 2026-04-13 04:11:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:31.375679 | orchestrator | 2026-04-13 04:11:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:31.375697 | orchestrator | 2026-04-13 04:11:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:34.426295 | orchestrator | 2026-04-13 04:11:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:34.429135 | orchestrator | 2026-04-13 04:11:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:34.429254 | orchestrator | 2026-04-13 04:11:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:37.488370 | orchestrator | 2026-04-13 04:11:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:37.489798 | orchestrator | 2026-04-13 04:11:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:37.489846 | orchestrator | 2026-04-13 04:11:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:40.544681 | orchestrator | 2026-04-13 
04:11:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:40.546124 | orchestrator | 2026-04-13 04:11:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:40.546264 | orchestrator | 2026-04-13 04:11:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:43.590693 | orchestrator | 2026-04-13 04:11:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:43.592563 | orchestrator | 2026-04-13 04:11:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:43.592626 | orchestrator | 2026-04-13 04:11:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:46.642192 | orchestrator | 2026-04-13 04:11:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:46.644421 | orchestrator | 2026-04-13 04:11:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:46.644521 | orchestrator | 2026-04-13 04:11:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:49.689708 | orchestrator | 2026-04-13 04:11:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:49.690630 | orchestrator | 2026-04-13 04:11:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:49.690726 | orchestrator | 2026-04-13 04:11:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:52.736099 | orchestrator | 2026-04-13 04:11:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:52.737587 | orchestrator | 2026-04-13 04:11:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:52.738008 | orchestrator | 2026-04-13 04:11:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:55.787071 | orchestrator | 2026-04-13 04:11:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:11:55.788755 | orchestrator | 2026-04-13 04:11:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:55.788847 | orchestrator | 2026-04-13 04:11:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:11:58.836316 | orchestrator | 2026-04-13 04:11:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:11:58.838139 | orchestrator | 2026-04-13 04:11:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:11:58.838198 | orchestrator | 2026-04-13 04:11:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:01.887797 | orchestrator | 2026-04-13 04:12:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:01.890223 | orchestrator | 2026-04-13 04:12:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:01.890643 | orchestrator | 2026-04-13 04:12:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:04.943015 | orchestrator | 2026-04-13 04:12:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:04.944646 | orchestrator | 2026-04-13 04:12:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:04.944784 | orchestrator | 2026-04-13 04:12:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:07.991865 | orchestrator | 2026-04-13 04:12:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:07.993426 | orchestrator | 2026-04-13 04:12:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:07.993541 | orchestrator | 2026-04-13 04:12:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:11.041654 | orchestrator | 2026-04-13 04:12:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:11.045924 | orchestrator | 2026-04-13 04:12:11 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:11.046079 | orchestrator | 2026-04-13 04:12:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:14.085609 | orchestrator | 2026-04-13 04:12:14 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:14.086589 | orchestrator | 2026-04-13 04:12:14 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:14.086643 | orchestrator | 2026-04-13 04:12:14 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:17.141990 | orchestrator | 2026-04-13 04:12:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:18.058584 | orchestrator | 2026-04-13 04:12:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:18.058685 | orchestrator | 2026-04-13 04:12:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:20.187897 | orchestrator | 2026-04-13 04:12:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:20.190811 | orchestrator | 2026-04-13 04:12:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:20.190903 | orchestrator | 2026-04-13 04:12:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:23.234596 | orchestrator | 2026-04-13 04:12:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:23.236767 | orchestrator | 2026-04-13 04:12:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:23.236840 | orchestrator | 2026-04-13 04:12:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:26.286322 | orchestrator | 2026-04-13 04:12:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:26.287672 | orchestrator | 2026-04-13 04:12:26 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:12:26.287723 | orchestrator | 2026-04-13 04:12:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:29.335188 | orchestrator | 2026-04-13 04:12:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:29.337539 | orchestrator | 2026-04-13 04:12:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:29.337573 | orchestrator | 2026-04-13 04:12:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:32.381417 | orchestrator | 2026-04-13 04:12:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:32.382489 | orchestrator | 2026-04-13 04:12:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:32.382554 | orchestrator | 2026-04-13 04:12:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:35.434252 | orchestrator | 2026-04-13 04:12:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:35.437095 | orchestrator | 2026-04-13 04:12:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:35.437120 | orchestrator | 2026-04-13 04:12:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:38.484572 | orchestrator | 2026-04-13 04:12:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:38.486561 | orchestrator | 2026-04-13 04:12:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:38.486620 | orchestrator | 2026-04-13 04:12:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:41.528371 | orchestrator | 2026-04-13 04:12:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:41.531327 | orchestrator | 2026-04-13 04:12:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:41.531395 | orchestrator | 2026-04-13 04:12:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:12:44.581841 | orchestrator | 2026-04-13 04:12:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:44.583763 | orchestrator | 2026-04-13 04:12:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:44.583824 | orchestrator | 2026-04-13 04:12:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:47.636991 | orchestrator | 2026-04-13 04:12:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:47.639371 | orchestrator | 2026-04-13 04:12:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:47.639445 | orchestrator | 2026-04-13 04:12:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:50.690257 | orchestrator | 2026-04-13 04:12:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:50.691583 | orchestrator | 2026-04-13 04:12:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:50.691631 | orchestrator | 2026-04-13 04:12:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:53.742784 | orchestrator | 2026-04-13 04:12:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:53.744922 | orchestrator | 2026-04-13 04:12:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:53.744985 | orchestrator | 2026-04-13 04:12:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:56.792769 | orchestrator | 2026-04-13 04:12:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:56.795958 | orchestrator | 2026-04-13 04:12:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:56.796026 | orchestrator | 2026-04-13 04:12:56 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:12:59.842341 | orchestrator | 2026-04-13 
04:12:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:12:59.844034 | orchestrator | 2026-04-13 04:12:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:12:59.844084 | orchestrator | 2026-04-13 04:12:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:02.898129 | orchestrator | 2026-04-13 04:13:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:02.899999 | orchestrator | 2026-04-13 04:13:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:02.900057 | orchestrator | 2026-04-13 04:13:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:05.946513 | orchestrator | 2026-04-13 04:13:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:05.950593 | orchestrator | 2026-04-13 04:13:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:05.950757 | orchestrator | 2026-04-13 04:13:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:08.996673 | orchestrator | 2026-04-13 04:13:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:08.999084 | orchestrator | 2026-04-13 04:13:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:08.999172 | orchestrator | 2026-04-13 04:13:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:12.046439 | orchestrator | 2026-04-13 04:13:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:12.049179 | orchestrator | 2026-04-13 04:13:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:12.049307 | orchestrator | 2026-04-13 04:13:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:15.098537 | orchestrator | 2026-04-13 04:13:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:13:15.099399 | orchestrator | 2026-04-13 04:13:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:15.099444 | orchestrator | 2026-04-13 04:13:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:18.154368 | orchestrator | 2026-04-13 04:13:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:18.158181 | orchestrator | 2026-04-13 04:13:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:18.158293 | orchestrator | 2026-04-13 04:13:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:21.204881 | orchestrator | 2026-04-13 04:13:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:21.207383 | orchestrator | 2026-04-13 04:13:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:21.207463 | orchestrator | 2026-04-13 04:13:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:24.243326 | orchestrator | 2026-04-13 04:13:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:24.244283 | orchestrator | 2026-04-13 04:13:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:24.244330 | orchestrator | 2026-04-13 04:13:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:27.277352 | orchestrator | 2026-04-13 04:13:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:27.280180 | orchestrator | 2026-04-13 04:13:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:27.280258 | orchestrator | 2026-04-13 04:13:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:30.319960 | orchestrator | 2026-04-13 04:13:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:30.321149 | orchestrator | 2026-04-13 04:13:30 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:30.321185 | orchestrator | 2026-04-13 04:13:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:33.362332 | orchestrator | 2026-04-13 04:13:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:33.363852 | orchestrator | 2026-04-13 04:13:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:33.363910 | orchestrator | 2026-04-13 04:13:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:36.399237 | orchestrator | 2026-04-13 04:13:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:36.401642 | orchestrator | 2026-04-13 04:13:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:36.401701 | orchestrator | 2026-04-13 04:13:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:39.442242 | orchestrator | 2026-04-13 04:13:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:39.444025 | orchestrator | 2026-04-13 04:13:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:39.444111 | orchestrator | 2026-04-13 04:13:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:42.492395 | orchestrator | 2026-04-13 04:13:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:42.492971 | orchestrator | 2026-04-13 04:13:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:42.493022 | orchestrator | 2026-04-13 04:13:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:45.530669 | orchestrator | 2026-04-13 04:13:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:45.530768 | orchestrator | 2026-04-13 04:13:45 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:13:45.530808 | orchestrator | 2026-04-13 04:13:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:48.572015 | orchestrator | 2026-04-13 04:13:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:48.573608 | orchestrator | 2026-04-13 04:13:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:48.573654 | orchestrator | 2026-04-13 04:13:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:51.617471 | orchestrator | 2026-04-13 04:13:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:51.618735 | orchestrator | 2026-04-13 04:13:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:51.618767 | orchestrator | 2026-04-13 04:13:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:54.668049 | orchestrator | 2026-04-13 04:13:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:54.670375 | orchestrator | 2026-04-13 04:13:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:54.670431 | orchestrator | 2026-04-13 04:13:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:13:57.712401 | orchestrator | 2026-04-13 04:13:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:13:57.714949 | orchestrator | 2026-04-13 04:13:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:13:57.715032 | orchestrator | 2026-04-13 04:13:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:00.773326 | orchestrator | 2026-04-13 04:14:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:00.774212 | orchestrator | 2026-04-13 04:14:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:00.774250 | orchestrator | 2026-04-13 04:14:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:14:03.822195 | orchestrator | 2026-04-13 04:14:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:03.824238 | orchestrator | 2026-04-13 04:14:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:03.824340 | orchestrator | 2026-04-13 04:14:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:06.873939 | orchestrator | 2026-04-13 04:14:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:06.876319 | orchestrator | 2026-04-13 04:14:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:06.876431 | orchestrator | 2026-04-13 04:14:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:09.921543 | orchestrator | 2026-04-13 04:14:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:09.922532 | orchestrator | 2026-04-13 04:14:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:09.922569 | orchestrator | 2026-04-13 04:14:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:12.960372 | orchestrator | 2026-04-13 04:14:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:12.961786 | orchestrator | 2026-04-13 04:14:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:12.961832 | orchestrator | 2026-04-13 04:14:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:16.009269 | orchestrator | 2026-04-13 04:14:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:16.012518 | orchestrator | 2026-04-13 04:14:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:16.012573 | orchestrator | 2026-04-13 04:14:16 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:19.056465 | orchestrator | 2026-04-13 
04:14:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:19.057575 | orchestrator | 2026-04-13 04:14:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:19.057601 | orchestrator | 2026-04-13 04:14:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:22.112079 | orchestrator | 2026-04-13 04:14:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:22.112839 | orchestrator | 2026-04-13 04:14:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:22.112905 | orchestrator | 2026-04-13 04:14:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:25.153626 | orchestrator | 2026-04-13 04:14:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:25.155483 | orchestrator | 2026-04-13 04:14:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:25.155537 | orchestrator | 2026-04-13 04:14:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:28.194964 | orchestrator | 2026-04-13 04:14:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:28.195789 | orchestrator | 2026-04-13 04:14:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:28.195832 | orchestrator | 2026-04-13 04:14:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:31.239810 | orchestrator | 2026-04-13 04:14:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:14:31.241863 | orchestrator | 2026-04-13 04:14:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:14:31.241936 | orchestrator | 2026-04-13 04:14:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:14:34.279711 | orchestrator | 2026-04-13 04:14:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED
2026-04-13 04:14:34.280498 | orchestrator | 2026-04-13 04:14:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:14:34.280556 | orchestrator | 2026-04-13 04:14:34 | INFO  | Wait 1 second(s) until the next check
2026-04-13 04:14:37.322377 | orchestrator | 2026-04-13 04:14:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:14:37.322544 | orchestrator | 2026-04-13 04:14:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:14:37.322561 | orchestrator | 2026-04-13 04:14:37 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds; tasks d4669e69-7e59-489c-99b4-e1b8031d1e22 and 566ce848-209b-45fd-8e0a-898310ae30c5 remained in state STARTED through 2026-04-13 04:20:06 ...]
2026-04-13 04:20:06.820022 | orchestrator | 2026-04-13 04:20:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:20:06.822003 | orchestrator | 2026-04-13 04:20:06 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:06.822123 | orchestrator | 2026-04-13 04:20:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:09.872553 | orchestrator | 2026-04-13 04:20:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:09.874623 | orchestrator | 2026-04-13 04:20:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:09.874693 | orchestrator | 2026-04-13 04:20:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:12.928851 | orchestrator | 2026-04-13 04:20:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:12.933167 | orchestrator | 2026-04-13 04:20:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:12.933399 | orchestrator | 2026-04-13 04:20:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:15.986715 | orchestrator | 2026-04-13 04:20:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:15.988021 | orchestrator | 2026-04-13 04:20:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:15.988090 | orchestrator | 2026-04-13 04:20:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:19.043308 | orchestrator | 2026-04-13 04:20:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:19.046677 | orchestrator | 2026-04-13 04:20:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:19.046955 | orchestrator | 2026-04-13 04:20:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:22.087599 | orchestrator | 2026-04-13 04:20:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:22.091106 | orchestrator | 2026-04-13 04:20:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:20:22.091613 | orchestrator | 2026-04-13 04:20:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:25.136257 | orchestrator | 2026-04-13 04:20:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:25.137835 | orchestrator | 2026-04-13 04:20:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:25.137991 | orchestrator | 2026-04-13 04:20:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:28.184116 | orchestrator | 2026-04-13 04:20:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:28.184831 | orchestrator | 2026-04-13 04:20:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:28.184870 | orchestrator | 2026-04-13 04:20:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:31.234337 | orchestrator | 2026-04-13 04:20:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:31.236595 | orchestrator | 2026-04-13 04:20:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:31.236655 | orchestrator | 2026-04-13 04:20:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:34.290233 | orchestrator | 2026-04-13 04:20:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:34.292422 | orchestrator | 2026-04-13 04:20:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:34.292477 | orchestrator | 2026-04-13 04:20:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:37.348230 | orchestrator | 2026-04-13 04:20:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:37.349273 | orchestrator | 2026-04-13 04:20:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:37.349319 | orchestrator | 2026-04-13 04:20:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:20:40.400338 | orchestrator | 2026-04-13 04:20:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:40.403228 | orchestrator | 2026-04-13 04:20:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:40.403359 | orchestrator | 2026-04-13 04:20:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:43.446702 | orchestrator | 2026-04-13 04:20:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:43.448888 | orchestrator | 2026-04-13 04:20:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:43.449029 | orchestrator | 2026-04-13 04:20:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:46.502926 | orchestrator | 2026-04-13 04:20:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:46.506223 | orchestrator | 2026-04-13 04:20:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:46.506304 | orchestrator | 2026-04-13 04:20:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:49.556694 | orchestrator | 2026-04-13 04:20:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:49.559090 | orchestrator | 2026-04-13 04:20:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:49.559240 | orchestrator | 2026-04-13 04:20:49 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:52.616735 | orchestrator | 2026-04-13 04:20:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:52.619297 | orchestrator | 2026-04-13 04:20:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:52.619370 | orchestrator | 2026-04-13 04:20:52 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:55.670267 | orchestrator | 2026-04-13 
04:20:55 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:55.671684 | orchestrator | 2026-04-13 04:20:55 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:55.671744 | orchestrator | 2026-04-13 04:20:55 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:20:58.714473 | orchestrator | 2026-04-13 04:20:58 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:20:58.715732 | orchestrator | 2026-04-13 04:20:58 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:20:58.715770 | orchestrator | 2026-04-13 04:20:58 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:01.768119 | orchestrator | 2026-04-13 04:21:01 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:01.770975 | orchestrator | 2026-04-13 04:21:01 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:01.771047 | orchestrator | 2026-04-13 04:21:01 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:04.822801 | orchestrator | 2026-04-13 04:21:04 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:04.824705 | orchestrator | 2026-04-13 04:21:04 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:04.824760 | orchestrator | 2026-04-13 04:21:04 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:07.878604 | orchestrator | 2026-04-13 04:21:07 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:07.880096 | orchestrator | 2026-04-13 04:21:07 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:07.880144 | orchestrator | 2026-04-13 04:21:07 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:10.927527 | orchestrator | 2026-04-13 04:21:10 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:21:10.930247 | orchestrator | 2026-04-13 04:21:10 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:10.930292 | orchestrator | 2026-04-13 04:21:10 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:13.974706 | orchestrator | 2026-04-13 04:21:13 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:13.977659 | orchestrator | 2026-04-13 04:21:13 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:13.977731 | orchestrator | 2026-04-13 04:21:13 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:17.029592 | orchestrator | 2026-04-13 04:21:17 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:17.031141 | orchestrator | 2026-04-13 04:21:17 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:17.031199 | orchestrator | 2026-04-13 04:21:17 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:20.075790 | orchestrator | 2026-04-13 04:21:20 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:20.078197 | orchestrator | 2026-04-13 04:21:20 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:20.078274 | orchestrator | 2026-04-13 04:21:20 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:23.118244 | orchestrator | 2026-04-13 04:21:23 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:23.120260 | orchestrator | 2026-04-13 04:21:23 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:23.120321 | orchestrator | 2026-04-13 04:21:23 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:26.171984 | orchestrator | 2026-04-13 04:21:26 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:26.173995 | orchestrator | 2026-04-13 04:21:26 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:26.174157 | orchestrator | 2026-04-13 04:21:26 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:29.223529 | orchestrator | 2026-04-13 04:21:29 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:29.225045 | orchestrator | 2026-04-13 04:21:29 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:29.225167 | orchestrator | 2026-04-13 04:21:29 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:32.274680 | orchestrator | 2026-04-13 04:21:32 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:32.276278 | orchestrator | 2026-04-13 04:21:32 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:32.276357 | orchestrator | 2026-04-13 04:21:32 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:35.321542 | orchestrator | 2026-04-13 04:21:35 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:35.324701 | orchestrator | 2026-04-13 04:21:35 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:35.324764 | orchestrator | 2026-04-13 04:21:35 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:38.375772 | orchestrator | 2026-04-13 04:21:38 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:38.378434 | orchestrator | 2026-04-13 04:21:38 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:38.378602 | orchestrator | 2026-04-13 04:21:38 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:41.423974 | orchestrator | 2026-04-13 04:21:41 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:41.425871 | orchestrator | 2026-04-13 04:21:41 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:21:41.425996 | orchestrator | 2026-04-13 04:21:41 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:44.471103 | orchestrator | 2026-04-13 04:21:44 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:44.472112 | orchestrator | 2026-04-13 04:21:44 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:44.472222 | orchestrator | 2026-04-13 04:21:44 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:47.518544 | orchestrator | 2026-04-13 04:21:47 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:47.522395 | orchestrator | 2026-04-13 04:21:47 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:47.522481 | orchestrator | 2026-04-13 04:21:47 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:50.569533 | orchestrator | 2026-04-13 04:21:50 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:50.571283 | orchestrator | 2026-04-13 04:21:50 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:50.571430 | orchestrator | 2026-04-13 04:21:50 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:53.628436 | orchestrator | 2026-04-13 04:21:53 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:53.630262 | orchestrator | 2026-04-13 04:21:53 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:53.630356 | orchestrator | 2026-04-13 04:21:53 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:21:56.679088 | orchestrator | 2026-04-13 04:21:56 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:56.683372 | orchestrator | 2026-04-13 04:21:56 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:56.683483 | orchestrator | 2026-04-13 04:21:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:21:59.736590 | orchestrator | 2026-04-13 04:21:59 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:21:59.737963 | orchestrator | 2026-04-13 04:21:59 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:21:59.738007 | orchestrator | 2026-04-13 04:21:59 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:02.790439 | orchestrator | 2026-04-13 04:22:02 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:02.796342 | orchestrator | 2026-04-13 04:22:02 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:02.796698 | orchestrator | 2026-04-13 04:22:02 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:05.849646 | orchestrator | 2026-04-13 04:22:05 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:05.851295 | orchestrator | 2026-04-13 04:22:05 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:05.851354 | orchestrator | 2026-04-13 04:22:05 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:08.901598 | orchestrator | 2026-04-13 04:22:08 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:08.904546 | orchestrator | 2026-04-13 04:22:08 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:08.904639 | orchestrator | 2026-04-13 04:22:08 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:11.956765 | orchestrator | 2026-04-13 04:22:11 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:11.959387 | orchestrator | 2026-04-13 04:22:11 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:11.959836 | orchestrator | 2026-04-13 04:22:11 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:15.010362 | orchestrator | 2026-04-13 
04:22:15 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:15.012957 | orchestrator | 2026-04-13 04:22:15 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:15.013039 | orchestrator | 2026-04-13 04:22:15 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:18.065110 | orchestrator | 2026-04-13 04:22:18 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:18.067786 | orchestrator | 2026-04-13 04:22:18 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:18.067842 | orchestrator | 2026-04-13 04:22:18 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:21.114850 | orchestrator | 2026-04-13 04:22:21 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:21.115982 | orchestrator | 2026-04-13 04:22:21 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:21.116155 | orchestrator | 2026-04-13 04:22:21 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:24.168532 | orchestrator | 2026-04-13 04:22:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:24.172156 | orchestrator | 2026-04-13 04:22:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:24.172297 | orchestrator | 2026-04-13 04:22:24 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:27.219446 | orchestrator | 2026-04-13 04:22:27 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:27.223013 | orchestrator | 2026-04-13 04:22:27 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:27.223102 | orchestrator | 2026-04-13 04:22:27 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:30.271625 | orchestrator | 2026-04-13 04:22:30 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED 2026-04-13 04:22:30.273698 | orchestrator | 2026-04-13 04:22:30 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:30.273746 | orchestrator | 2026-04-13 04:22:30 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:33.319040 | orchestrator | 2026-04-13 04:22:33 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:33.319874 | orchestrator | 2026-04-13 04:22:33 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:33.319950 | orchestrator | 2026-04-13 04:22:33 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:36.371289 | orchestrator | 2026-04-13 04:22:36 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:36.373532 | orchestrator | 2026-04-13 04:22:36 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:36.373574 | orchestrator | 2026-04-13 04:22:36 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:39.420435 | orchestrator | 2026-04-13 04:22:39 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:39.421347 | orchestrator | 2026-04-13 04:22:39 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:39.421371 | orchestrator | 2026-04-13 04:22:39 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:42.468112 | orchestrator | 2026-04-13 04:22:42 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:42.471374 | orchestrator | 2026-04-13 04:22:42 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:42.471419 | orchestrator | 2026-04-13 04:22:42 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:45.524036 | orchestrator | 2026-04-13 04:22:45 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:45.525211 | orchestrator | 2026-04-13 04:22:45 | INFO  
| Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:45.525291 | orchestrator | 2026-04-13 04:22:45 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:48.575071 | orchestrator | 2026-04-13 04:22:48 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:48.578324 | orchestrator | 2026-04-13 04:22:48 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:48.578386 | orchestrator | 2026-04-13 04:22:48 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:51.627539 | orchestrator | 2026-04-13 04:22:51 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:51.631163 | orchestrator | 2026-04-13 04:22:51 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:51.631528 | orchestrator | 2026-04-13 04:22:51 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:54.679985 | orchestrator | 2026-04-13 04:22:54 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:54.681226 | orchestrator | 2026-04-13 04:22:54 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:54.681323 | orchestrator | 2026-04-13 04:22:54 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:22:57.733255 | orchestrator | 2026-04-13 04:22:57 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:22:57.735491 | orchestrator | 2026-04-13 04:22:57 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:22:57.735543 | orchestrator | 2026-04-13 04:22:57 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:00.787253 | orchestrator | 2026-04-13 04:23:00 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:00.790194 | orchestrator | 2026-04-13 04:23:00 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 
04:23:00.790334 | orchestrator | 2026-04-13 04:23:00 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:03.837164 | orchestrator | 2026-04-13 04:23:03 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:03.841352 | orchestrator | 2026-04-13 04:23:03 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:03.841529 | orchestrator | 2026-04-13 04:23:03 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:06.893314 | orchestrator | 2026-04-13 04:23:06 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:06.895837 | orchestrator | 2026-04-13 04:23:06 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:06.895871 | orchestrator | 2026-04-13 04:23:06 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:09.943887 | orchestrator | 2026-04-13 04:23:09 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:09.945518 | orchestrator | 2026-04-13 04:23:09 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:09.945585 | orchestrator | 2026-04-13 04:23:09 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:12.990665 | orchestrator | 2026-04-13 04:23:12 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:12.992723 | orchestrator | 2026-04-13 04:23:12 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:12.992769 | orchestrator | 2026-04-13 04:23:12 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:16.038443 | orchestrator | 2026-04-13 04:23:16 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:16.038572 | orchestrator | 2026-04-13 04:23:16 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:16.038583 | orchestrator | 2026-04-13 04:23:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-13 04:23:19.088056 | orchestrator | 2026-04-13 04:23:19 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:19.090684 | orchestrator | 2026-04-13 04:23:19 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:19.090748 | orchestrator | 2026-04-13 04:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:22.133677 | orchestrator | 2026-04-13 04:23:22 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:22.133772 | orchestrator | 2026-04-13 04:23:22 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:22.133860 | orchestrator | 2026-04-13 04:23:22 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:25.182502 | orchestrator | 2026-04-13 04:23:25 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:25.184850 | orchestrator | 2026-04-13 04:23:25 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:25.184924 | orchestrator | 2026-04-13 04:23:25 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:28.238093 | orchestrator | 2026-04-13 04:23:28 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:28.241693 | orchestrator | 2026-04-13 04:23:28 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:28.241754 | orchestrator | 2026-04-13 04:23:28 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:31.296708 | orchestrator | 2026-04-13 04:23:31 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:31.300120 | orchestrator | 2026-04-13 04:23:31 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:31.300192 | orchestrator | 2026-04-13 04:23:31 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:34.348335 | orchestrator | 2026-04-13 
04:23:34 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:34.352117 | orchestrator | 2026-04-13 04:23:34 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:34.352244 | orchestrator | 2026-04-13 04:23:34 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:37.404801 | orchestrator | 2026-04-13 04:23:37 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:37.408001 | orchestrator | 2026-04-13 04:23:37 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:37.408116 | orchestrator | 2026-04-13 04:23:37 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:40.468766 | orchestrator | 2026-04-13 04:23:40 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:40.470459 | orchestrator | 2026-04-13 04:23:40 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:40.470839 | orchestrator | 2026-04-13 04:23:40 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:43.529019 | orchestrator | 2026-04-13 04:23:43 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:43.530539 | orchestrator | 2026-04-13 04:23:43 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:43.530585 | orchestrator | 2026-04-13 04:23:43 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:46.582559 | orchestrator | 2026-04-13 04:23:46 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED 2026-04-13 04:23:46.586189 | orchestrator | 2026-04-13 04:23:46 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED 2026-04-13 04:23:46.586403 | orchestrator | 2026-04-13 04:23:46 | INFO  | Wait 1 second(s) until the next check 2026-04-13 04:23:49.637967 | orchestrator | 2026-04-13 04:23:49 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state 
STARTED
2026-04-13 04:23:49.639222 | orchestrator | 2026-04-13 04:23:49 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:23:49.639288 | orchestrator | 2026-04-13 04:23:49 | INFO  | Wait 1 second(s) until the next check
2026-04-13 04:23:52.690098 | orchestrator | 2026-04-13 04:23:52 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:23:52.695052 | orchestrator | 2026-04-13 04:23:52 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:23:52.695106 | orchestrator | 2026-04-13 04:23:52 | INFO  | Wait 1 second(s) until the next check
[... the same three polling messages repeated every ~3 seconds from 04:23:55 through 04:30:24 (with a gap in output between 04:26:13 and 04:28:16); both tasks remained in state STARTED throughout ...]
2026-04-13 04:30:24.270826 | orchestrator | 2026-04-13 04:30:24 | INFO  | Task d4669e69-7e59-489c-99b4-e1b8031d1e22 is in state STARTED
2026-04-13 04:30:24.271861 | orchestrator | 2026-04-13 04:30:24 | INFO  | Task 566ce848-209b-45fd-8e0a-898310ae30c5 is in state STARTED
2026-04-13 04:30:24.271916 | orchestrator | 2026-04-13 04:30:24 | INFO  | Wait 1 second(s) until the next check
2026-04-13 04:30:25.288141 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-13 04:30:25.289995 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-13 04:30:26.085456 | PLAY [Post output play]
2026-04-13 04:30:26.101647 | LOOP [stage-output : Register sources]
2026-04-13 04:30:26.171503 | TASK [stage-output : Check sudo]
2026-04-13 04:30:27.124229 | orchestrator | sudo: a password is required
2026-04-13 04:30:27.208717 | orchestrator | ok: Runtime: 0:00:00.015875
2026-04-13 04:30:27.225658 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-13 04:30:27.262359 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-13 04:30:27.336415 | orchestrator | ok
2026-04-13 04:30:27.346350 | LOOP [stage-output : Ensure target folders exist]
2026-04-13 04:30:27.872935 | orchestrator | ok: "docs"
2026-04-13 04:30:28.127019 | orchestrator | ok: "artifacts"
2026-04-13 04:30:28.367157 | orchestrator | ok: "logs"
2026-04-13 04:30:28.384778 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-13 04:30:28.414712 | TASK [stage-output : Make all log files readable]
2026-04-13 04:30:28.688555 | orchestrator | ok
2026-04-13 04:30:28.694780 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-13 04:30:28.730414 | orchestrator | skipping: Conditional result was False
2026-04-13 04:30:28.744716 | TASK [stage-output : Discover log files for compression]
2026-04-13 04:30:28.769000 | orchestrator | skipping: Conditional result was False
2026-04-13 04:30:28.784256 | LOOP [stage-output : Archive everything from logs]
2026-04-13 04:30:28.823255 | PLAY [Post cleanup play]
2026-04-13 04:30:28.841697 | TASK [Set cloud fact (Zuul deployment)]
2026-04-13 04:30:28.884823 | orchestrator | ok
2026-04-13 04:30:28.892609 | TASK [Set cloud fact (local deployment)]
2026-04-13 04:30:28.926791 | orchestrator | skipping: Conditional result was False
2026-04-13 04:30:28.944360 | TASK [Clean the cloud environment]
2026-04-13 04:30:29.565808 | orchestrator | 2026-04-13 04:30:29 - clean up servers
2026-04-13 04:30:30.533380 | orchestrator | 2026-04-13 04:30:30 - testbed-manager
2026-04-13 04:30:30.640442 | orchestrator | 2026-04-13 04:30:30 - testbed-node-4
2026-04-13 04:30:30.736831 | orchestrator | 2026-04-13 04:30:30 - testbed-node-2
2026-04-13 04:30:30.832598 | orchestrator | 2026-04-13 04:30:30 - testbed-node-0
2026-04-13 04:30:30.922119 | orchestrator | 2026-04-13 04:30:30 - testbed-node-3
2026-04-13 04:30:31.015850 | orchestrator | 2026-04-13 04:30:31 - testbed-node-1
2026-04-13 04:30:31.111203 | orchestrator | 2026-04-13 04:30:31 - testbed-node-5
2026-04-13 04:30:31.203442 | orchestrator | 2026-04-13 04:30:31 - clean up keypairs
2026-04-13 04:30:31.223944 | orchestrator | 2026-04-13 04:30:31 - testbed
2026-04-13 04:30:31.247095 | orchestrator | 2026-04-13 04:30:31 - wait for servers to be gone
2026-04-13 04:30:47.160186 | orchestrator | 2026-04-13 04:30:47 - clean up ports
2026-04-13 04:30:47.393042 | orchestrator | 2026-04-13 04:30:47 - 0fd9be42-9555-47e3-8055-1d6ba4097a0e
2026-04-13 04:30:47.677870 | orchestrator | 2026-04-13 04:30:47 - 108560b8-e179-4da8-8646-aad44b7686d8
2026-04-13 04:30:47.918661 | orchestrator | 2026-04-13 04:30:47 - 3df93b35-b7fd-4c83-942a-2afb77904698
2026-04-13 04:30:48.194338 | orchestrator | 2026-04-13 04:30:48 - 7354dad6-e901-406d-b5fb-0f87ec4534c6
2026-04-13 04:30:48.412092 | orchestrator | 2026-04-13 04:30:48 - 83b513d5-443e-4a9f-be68-5df5546b2e9b
2026-04-13 04:30:48.862748 | orchestrator | 2026-04-13 04:30:48 - e331f814-ac8f-4ae9-aff0-c700aa90482e
2026-04-13 04:30:49.108058 | orchestrator | 2026-04-13 04:30:49 - e835b4fb-5b4b-4779-9d4a-be1dd85774ad
2026-04-13 04:30:49.437952 | orchestrator | 2026-04-13 04:30:49 - clean up volumes
2026-04-13 04:30:49.598731 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-0-node-base
2026-04-13 04:30:49.641059 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-1-node-base
2026-04-13 04:30:49.690817 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-4-node-base
2026-04-13 04:30:49.736539 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-5-node-base
2026-04-13 04:30:49.779190 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-3-node-base
2026-04-13 04:30:49.823177 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-2-node-base
2026-04-13 04:30:49.872886 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-manager-base
2026-04-13 04:30:49.916562 | orchestrator | 2026-04-13 04:30:49 - testbed-volume-3-node-3
2026-04-13 04:30:49.966361 | orchestrator | 
2026-04-13 04:30:49 - testbed-volume-2-node-5 2026-04-13 04:30:50.016747 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-6-node-3 2026-04-13 04:30:50.060620 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-5-node-5 2026-04-13 04:30:50.108559 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-1-node-4 2026-04-13 04:30:50.152171 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-8-node-5 2026-04-13 04:30:50.196336 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-4-node-4 2026-04-13 04:30:50.238928 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-7-node-4 2026-04-13 04:30:50.284694 | orchestrator | 2026-04-13 04:30:50 - testbed-volume-0-node-3 2026-04-13 04:30:50.333192 | orchestrator | 2026-04-13 04:30:50 - disconnect routers 2026-04-13 04:30:50.490153 | orchestrator | 2026-04-13 04:30:50 - testbed 2026-04-13 04:30:52.124240 | orchestrator | 2026-04-13 04:30:52 - clean up subnets 2026-04-13 04:30:52.176808 | orchestrator | 2026-04-13 04:30:52 - subnet-testbed-management 2026-04-13 04:30:52.394603 | orchestrator | 2026-04-13 04:30:52 - clean up networks 2026-04-13 04:30:52.581795 | orchestrator | 2026-04-13 04:30:52 - net-testbed-management 2026-04-13 04:30:52.956557 | orchestrator | 2026-04-13 04:30:52 - clean up security groups 2026-04-13 04:30:53.000201 | orchestrator | 2026-04-13 04:30:52 - testbed-management 2026-04-13 04:30:53.153719 | orchestrator | 2026-04-13 04:30:53 - testbed-node 2026-04-13 04:30:53.277948 | orchestrator | 2026-04-13 04:30:53 - clean up floating ips 2026-04-13 04:30:53.319901 | orchestrator | 2026-04-13 04:30:53 - 81.163.193.180 2026-04-13 04:30:53.716061 | orchestrator | 2026-04-13 04:30:53 - clean up routers 2026-04-13 04:30:53.822864 | orchestrator | 2026-04-13 04:30:53 - testbed 2026-04-13 04:30:55.006074 | orchestrator | ok: Runtime: 0:00:25.605726 2026-04-13 04:30:55.010694 | 2026-04-13 04:30:55.010943 | PLAY RECAP 2026-04-13 04:30:55.011069 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 
7 rescued: 0 ignored: 0 2026-04-13 04:30:55.011177 | 2026-04-13 04:30:55.183883 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-13 04:30:55.186765 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-13 04:30:55.952351 | 2026-04-13 04:30:55.952534 | PLAY [Cleanup play] 2026-04-13 04:30:55.969927 | 2026-04-13 04:30:55.970104 | TASK [Set cloud fact (Zuul deployment)] 2026-04-13 04:30:56.039943 | orchestrator | ok 2026-04-13 04:30:56.050152 | 2026-04-13 04:30:56.050349 | TASK [Set cloud fact (local deployment)] 2026-04-13 04:30:56.085906 | orchestrator | skipping: Conditional result was False 2026-04-13 04:30:56.103483 | 2026-04-13 04:30:56.103685 | TASK [Clean the cloud environment] 2026-04-13 04:30:57.267822 | orchestrator | 2026-04-13 04:30:57 - clean up servers 2026-04-13 04:30:57.860872 | orchestrator | 2026-04-13 04:30:57 - clean up keypairs 2026-04-13 04:30:57.880388 | orchestrator | 2026-04-13 04:30:57 - wait for servers to be gone 2026-04-13 04:30:57.926813 | orchestrator | 2026-04-13 04:30:57 - clean up ports 2026-04-13 04:30:58.002682 | orchestrator | 2026-04-13 04:30:58 - clean up volumes 2026-04-13 04:30:58.095530 | orchestrator | 2026-04-13 04:30:58 - disconnect routers 2026-04-13 04:30:58.126213 | orchestrator | 2026-04-13 04:30:58 - clean up subnets 2026-04-13 04:30:58.147942 | orchestrator | 2026-04-13 04:30:58 - clean up networks 2026-04-13 04:30:58.301687 | orchestrator | 2026-04-13 04:30:58 - clean up security groups 2026-04-13 04:30:58.335427 | orchestrator | 2026-04-13 04:30:58 - clean up floating ips 2026-04-13 04:30:58.362873 | orchestrator | 2026-04-13 04:30:58 - clean up routers 2026-04-13 04:30:58.654588 | orchestrator | ok: Runtime: 0:00:01.509004 2026-04-13 04:30:58.658471 | 2026-04-13 04:30:58.658664 | PLAY RECAP 2026-04-13 04:30:58.658920 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-13 
04:30:58.659002 | 2026-04-13 04:30:58.792888 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-13 04:30:58.793932 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-13 04:30:59.572905 | 2026-04-13 04:30:59.573077 | PLAY [Base post-fetch] 2026-04-13 04:30:59.589040 | 2026-04-13 04:30:59.589186 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-13 04:30:59.647587 | orchestrator | skipping: Conditional result was False 2026-04-13 04:30:59.654623 | 2026-04-13 04:30:59.654890 | TASK [fetch-output : Set log path for single node] 2026-04-13 04:30:59.694900 | orchestrator | ok 2026-04-13 04:30:59.701301 | 2026-04-13 04:30:59.701429 | LOOP [fetch-output : Ensure local output dirs] 2026-04-13 04:31:00.193192 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/work/logs" 2026-04-13 04:31:00.466389 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/work/artifacts" 2026-04-13 04:31:00.759187 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f0c0073e0ad3480e915bcf487ee2e865/work/docs" 2026-04-13 04:31:00.783689 | 2026-04-13 04:31:00.783876 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-13 04:31:01.737459 | orchestrator | changed: .d..t...... ./ 2026-04-13 04:31:01.737795 | orchestrator | changed: All items complete 2026-04-13 04:31:01.737847 | 2026-04-13 04:31:02.485995 | orchestrator | changed: .d..t...... ./ 2026-04-13 04:31:03.270021 | orchestrator | changed: .d..t...... 
./ 2026-04-13 04:31:03.313969 | 2026-04-13 04:31:03.314175 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-13 04:31:03.350304 | orchestrator | skipping: Conditional result was False 2026-04-13 04:31:03.353297 | orchestrator | skipping: Conditional result was False 2026-04-13 04:31:03.375094 | 2026-04-13 04:31:03.375205 | PLAY RECAP 2026-04-13 04:31:03.375275 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-13 04:31:03.375324 | 2026-04-13 04:31:03.510145 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-13 04:31:03.511248 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-13 04:31:04.298360 | 2026-04-13 04:31:04.298522 | PLAY [Base post] 2026-04-13 04:31:04.314684 | 2026-04-13 04:31:04.315013 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-13 04:31:05.289457 | orchestrator | changed 2026-04-13 04:31:05.300306 | 2026-04-13 04:31:05.300442 | PLAY RECAP 2026-04-13 04:31:05.300517 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-13 04:31:05.300588 | 2026-04-13 04:31:05.438701 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-13 04:31:05.439816 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-13 04:31:06.320448 | 2026-04-13 04:31:06.320617 | PLAY [Base post-logs] 2026-04-13 04:31:06.332493 | 2026-04-13 04:31:06.332750 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-13 04:31:06.809945 | localhost | changed 2026-04-13 04:31:06.823422 | 2026-04-13 04:31:06.823620 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-13 04:31:06.851422 | localhost | ok 2026-04-13 04:31:06.855329 | 2026-04-13 04:31:06.855453 | TASK [Set zuul-log-path fact] 2026-04-13 
04:31:06.872943 | localhost | ok 2026-04-13 04:31:06.883874 | 2026-04-13 04:31:06.883998 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-13 04:31:06.909248 | localhost | ok 2026-04-13 04:31:06.913242 | 2026-04-13 04:31:06.913372 | TASK [upload-logs : Create log directories] 2026-04-13 04:31:07.449199 | localhost | changed 2026-04-13 04:31:07.452097 | 2026-04-13 04:31:07.452205 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-13 04:31:07.972236 | localhost -> localhost | ok: Runtime: 0:00:00.007417 2026-04-13 04:31:07.976902 | 2026-04-13 04:31:07.977026 | TASK [upload-logs : Upload logs to log server] 2026-04-13 04:31:08.560742 | localhost | Output suppressed because no_log was given 2026-04-13 04:31:08.562669 | 2026-04-13 04:31:08.562795 | LOOP [upload-logs : Compress console log and json output] 2026-04-13 04:31:08.612130 | localhost | skipping: Conditional result was False 2026-04-13 04:31:08.618253 | localhost | skipping: Conditional result was False 2026-04-13 04:31:08.631071 | 2026-04-13 04:31:08.631243 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-13 04:31:08.682908 | localhost | skipping: Conditional result was False 2026-04-13 04:31:08.683682 | 2026-04-13 04:31:08.687311 | localhost | skipping: Conditional result was False 2026-04-13 04:31:08.701756 | 2026-04-13 04:31:08.701949 | LOOP [upload-logs : Upload console log and json output]
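The deploy step above ends in RESULT_TIMED_OUT while still printing "Task … is in state STARTED … Wait 1 second(s) until the next check" once per second. A minimal sketch of that poll-until-done pattern is below; `wait_for_tasks`, `get_state`, and the state names are assumptions for illustration, not the OSISM client API. The key point is that the job-level timeout, not the loop itself, is what eventually aborts the wait.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0, sleep=time.sleep):
    """Poll task states until every task reports SUCCESS or the timeout expires.

    get_state: callable(task_id) -> state string (hypothetical; stands in for
    whatever API the deploy tooling uses to query task state).
    Returns True when all tasks finished, False on timeout -- mirroring the
    RESULT_TIMED_OUT outcome seen in the log when tasks stay in STARTED.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        states = {tid: get_state(tid) for tid in task_ids}
        if all(state == "SUCCESS" for state in states.values()):
            return True
        # Tasks still running: wait before the next check, as the log does.
        sleep(interval)
    return False
```

Injecting `sleep` keeps the loop testable without real delays; in production the defaults (`time.sleep`, 1-second interval) reproduce the cadence seen in the log.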